Project acronym BANDWIDTH
Project The cost of limited communication bandwidth in distributed computing
Researcher (PI) Keren CENSOR-HILLEL
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Distributed systems underlie many modern technologies, a prime example being the Internet. The ever-increasing abundance of distributed systems necessitates that their design and usage be backed by strong theoretical foundations.
A major challenge that distributed systems face is the lack of a central authority, which brings many aspects of uncertainty into the environment, in the form of unknown network topology or unpredictable dynamic behavior. A practical restriction of distributed systems, which is at the heart of this proposal, is the limited bandwidth available for communication between the network components.
A central family of distributed tasks is that of local tasks, informally described as tasks that can be solved by sending information over only a relatively small number of hops. A cornerstone example is the need to break symmetry and provide better utilization of resources, which can be achieved by producing a valid coloring of the nodes using some small number of colors. Remarkably, there are still huge gaps between the known upper and lower bounds for the complexity of many local tasks. This holds even under the powerful assumption of unlimited bandwidth. While some known algorithms indeed use small messages, the complexity gaps are even larger than in the unlimited-bandwidth case. This is not a mere coincidence: the existing theoretical infrastructure is provably incapable of giving stronger lower bounds for many local tasks under limited bandwidth.
This proposal zooms in on this crucial blind spot in the current literature on the theory of distributed computing, namely, the study of local tasks under limited bandwidth. The goal of this research is to produce fast algorithms for fundamental distributed local tasks under restricted bandwidth, as well as to understand their limitations by providing lower bounds.
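As a concrete, purely illustrative example of a local task under bandwidth constraints, the sketch below runs a standard randomized (Δ+1)-coloring procedure in a synchronous message-passing model in which each message is a single candidate color, and hence fits in O(log n) bits; the toy graph and helper names are assumptions for illustration, not part of the proposal.

```python
import random

# Illustrative graph: adjacency lists of a small network (an assumption, not from the proposal).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
max_degree = max(len(nbrs) for nbrs in graph.values())
palette_size = max_degree + 1  # (Delta + 1) colors always suffice for a proper coloring.

color = {v: None for v in graph}  # None = still uncolored.

def one_round():
    """One synchronous round: every uncolored node draws a candidate color from its
    free palette, sends it to its neighbors (a single small message), and keeps it
    if no neighbor proposed or already holds the same color."""
    proposals = {}
    for v in graph:
        if color[v] is None:
            taken = {color[u] for u in graph[v] if color[u] is not None}
            free = [c for c in range(palette_size) if c not in taken]
            proposals[v] = random.choice(free)
    for v, c in proposals.items():
        conflict = any(proposals.get(u) == c or color[u] == c for u in graph[v])
        if not conflict:
            color[v] = c

# Run rounds until every node is colored; the number of rounds is the quantity whose
# bandwidth-dependent complexity the proposal studies.
rounds = 0
while any(c is None for c in color.values()):
    one_round()
    rounds += 1
print(rounds, color)
```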
Max ERC Funding
1 486 480 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym BeyondA1
Project Set theory beyond the first uncountable cardinal
Researcher (PI) Assaf Shmuel Rinot
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary We propose to establish a research group that will unveil the combinatorial nature of the second uncountable cardinal. This includes its Ramsey-theoretic, order-theoretic, graph-theoretic and topological features. Among others, we will be directly addressing fundamental problems due to Erdős, Rado, Galvin, and Shelah.
While some of these problems are old and well-known, an unexpected series of breakthroughs from the last three years suggests that now is a promising point in time to carry out such a project. Indeed, within a short period, four previously unattainable problems concerning the second uncountable cardinal were successfully tackled: Asperó on a club-guessing problem of Shelah, Krueger on the club-isomorphism problem for Aronszajn trees, Neeman on the isomorphism problem for dense sets of reals, and the PI on the Souslin problem. Each of these results was obtained through the development of a completely new technical framework, and these frameworks could now pave the way for the solution of some major open questions.
A goal of the highest risk in this project is the discovery of a consistent (possibly, parameterized) forcing axiom that will (preferably, simultaneously) provide structure theorems for stationary sets, linearly ordered sets, trees, graphs, and partition relations, as well as the refutation of various forms of club-guessing principles, all at the level of the second uncountable cardinal. In comparison, at the level of the first uncountable cardinal, a forcing axiom due to Foreman, Magidor and Shelah achieves exactly that.
To approach our goals, the proposed project is divided into four core areas: Uncountable trees, Ramsey theory on ordinals, Club-guessing principles, and Forcing Axioms. There is a rich bilateral interaction between any pair of the four different cores, but the proposed division will allow an efficient allocation of manpower, and will increase the chances of parallel success.
Max ERC Funding
1 362 500 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym BirNonArchGeom
Project Birational and non-archimedean geometries
Researcher (PI) Michael TEMKIN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary Resolution of singularities is one of the classical, central and difficult areas of algebraic geometry, with a century-long history of intensive research and contributions by such great names as Zariski, Hironaka and Abhyankar. Nowadays, desingularization of schemes of characteristic zero is very well understood, while semistable reduction of morphisms and desingularization in positive characteristic are still waiting for major breakthroughs. In addition to the classical techniques with their triumph in characteristic zero, modern resolution of singularities includes de Jong's method of alterations, toroidal methods, formal analytic and non-archimedean methods, etc.
The aim of the proposed research is to study nearly all directions in resolution of singularities and semistable reduction, as well as the wild ramification phenomena, which are probably the main obstacle to transferring methods from characteristic zero to positive characteristic.
The methods of algebraic and non-archimedean geometries are intertwined in the proposal, though algebraic geometry is somewhat dominant, especially due to the new stack-theoretic techniques. It seems very probable that increasing the symbiosis between birational and non-archimedean geometries will be one of the by-products of this research.
Max ERC Funding
1 365 600 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym CLAUSTRUM
Project The Claustrum: A Circuit Hub for Attention
Researcher (PI) Amihai CITRI
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), LS5, ERC-2017-COG
Summary Our senses face a constant barrage of information. Hence, understanding how our brain enables us to attend to relevant stimuli, while ignoring distractions, is of increasing biomedical importance. Recently, I discovered that the claustrum, a multi-sensory hub and recipient of extensive neuromodulatory input, enables resilience to distraction.
In my ERC project, I will explore the mechanisms underlying claustral mediation of resilience to distraction and develop novel approaches for assessing and modulating attention in mice, with implications for humans. Transgenic mouse models that I identified as enabling selective access to claustral neurons overcome the claustrum's limiting anatomy, making it accessible to functional investigation. Using this novel genetic access, I obtained preliminary results strongly suggesting that the claustrum functions to filter distractions by adjusting cortical sensory gain.
My specific aims are: 1) To delineate the mechanisms whereby the claustrum achieves sensory gain control, by applying in-vivo cell-attached, multi-unit and fiber photometry recordings from claustral and cortical neurons during attention-demanding tasks. 2) To discriminate between the functions of the claustrum in multi-sensory integration and implementation of attention strategies, by employing multi-sensory behavioral paradigms while modulating claustral function. 3) To develop validated complementary physiological and behavioral protocols for adjusting claustral mediation of attention via neuromodulation.
This study is unique in its focus and aims: it will provide a stringent neurophysiological framework for defining a key mechanism underlying cognitive concepts of attention, and establish a novel platform for studying the function of the claustrum and manipulating its activity. The project is designed to achieve breakthroughs of a fundamental nature and potentially lead to diagnostic and therapeutic advances relevant to attention disorders.
Max ERC Funding
1 995 000 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym COFBMIX
Project Cortical feedback in figure-background segregation of odors.
Researcher (PI) Dan ROKNI
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), LS5, ERC-2017-STG
Summary A key question in neuroscience is how information is processed by sensory systems to guide behavior. Most of our knowledge about sensory processing is based on presentation of simple isolated stimuli and recording the corresponding neural activity in relevant brain areas. Yet sensory stimuli in real life are never isolated and typically not simple. How the brain processes complex stimuli, simultaneously arising from multiple objects, is unknown. Our daily experience (as well as well-controlled experiments) shows that only parts of a complex sensory scene can be perceived - we cannot listen to more than one speaker at a party. Importantly, one can easily choose what is important and should be processed and what can be ignored as background. These observations lead to the prevalent hypothesis that feedback projections from ‘higher’ brain areas to more peripheral sensory areas are involved in processing of complex stimuli. However, experimental analysis of signals conveyed by feedback projections in behaving animals is scarce. The nature of these signals and how they relate to behavior is unknown.
Here I propose a cutting-edge approach to directly record feedback signals in the olfactory system of behaving mice. We will use chronically implanted electrodes to record the modulation of olfactory bulb (OB) principal neurons by task-related context. Additionally, we will record from piriform cortical (PC) neurons that project back to the OB. These will be tagged with channelrhodopsin-2 and identified by light sensitivity. Finally, we will express the spectrally distinct Ca++ indicators GCaMP6 and RCaMP2 in PC neurons and in olfactory sensory neurons, respectively, and use 2-photon microscopy to analyze the spatio-temporal relationship between feedforward and feedback inputs in the OB. This comprehensive approach will provide an explanation of how feedforward and feedback inputs are integrated to process complex stimuli.
Max ERC Funding
1 500 000 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym CSG
Project C^0 symplectic geometry
Researcher (PI) Lev Buhovski
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2017-STG
Summary "The objective of this proposal is to study ""continuous"" (or C^0) objects, as well as C^0 properties of smooth objects, in the field of symplectic geometry and topology. C^0 symplectic geometry has seen spectacular progress in recent years, drawing attention of mathematicians from various background. The proposed study aims to discover new fascinating C^0 phenomena in symplectic geometry.
One circle of questions concerns symplectic and Hamiltonian homeomorphisms. Recent studies indicate that these objects possess both rigidity and flexibility, appearing in surprising and counter-intuitive ways. Our understanding of symplectic and Hamiltonian homeomorphisms is far from being satisfactory, and here we intend to study questions related to action of symplectic homeomorphisms on submanifolds. Some other questions are about Hamiltonian homeomorphisms in relation to the celebrated Arnold conjecture. The PI suggests to study spectral invariants of continuous Hamiltonian flows, which allow to formulate the C^0 Arnold conjecture in higher dimensions. Another central problem that the PI will work on is the C^0 flux conjecture.
A second circle of questions is about the Poisson bracket operator, and its functional-theoretic properties. The first question concerns the lower bound for the Poisson bracket invariant of a cover, conjectured by L. Polterovich who indicated relations between this problem and quantum mechanics. Another direction aims to study the C^0 rigidity versus flexibility of the L_p norm of the Poisson bracket. Despite a recent progress in dimension two showing rigidity, very little is known in higher dimensions. The PI proposes to use combination of tools from topology and from hard analysis in order to address this question, whose solution will be a big step towards understanding functional-theoretic properties of the Poisson bracket operator."
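For the reader's orientation, the Poisson bracket referred to above is given, in canonical coordinates $(q_1,\dots,q_n,p_1,\dots,p_n)$ on $\mathbb{R}^{2n}$, by the standard formula (recalled here as background, not as a contribution of the proposal):
\[
\{f,g\} \;=\; \sum_{i=1}^{n}\left(\frac{\partial f}{\partial q_i}\,\frac{\partial g}{\partial p_i}\;-\;\frac{\partial f}{\partial p_i}\,\frac{\partial g}{\partial q_i}\right),
\]
and the L_p-norm questions mentioned above concern the behavior of quantities such as $\|\{f,g\}\|_{L_p}$ under C^0-small perturbations of f and g.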
Max ERC Funding
1 345 282 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym DeepInternal
Project Going Deep and Blind with Internal Statistics
Researcher (PI) Michal IRANI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Unsupervised visual inference can often be performed by exploiting the internal redundancy inside a single visual datum (an image or a video). The strong repetition of patches inside a single image/video provides a powerful data-specific prior for solving a variety of vision tasks in a “blind” manner: (i) blind in the sense that sophisticated unsupervised inferences can be made with no prior examples or training; (ii) blind in the sense that complex ill-posed inverse problems can be solved, even when the forward degradation is unknown.
While the above fully unsupervised approach achieved impressive results, it relies on internal data alone, hence cannot enjoy the “wisdom of the crowd” which Deep-Learning (DL) so wisely extracts from external collections of images, yielding state-of-the-art (SOTA) results. Nevertheless, DL requires huge amounts of training data, which restricts its applicability. Moreover, some internal image-specific information, which is clearly visible, remains unexploited by today's DL methods. One such example is shown in Fig.1.
We propose to combine the power of these two complementary approaches – unsupervised Internal Data Recurrence and Deep Learning – to obtain the best of both worlds. If successful, this will have several important outcomes including:
• A wide range of low-level & high-level inferences (image & video).
• A continuum between Internal & External training – a platform to explore theoretical and practical tradeoffs between the amount of available training data and optimal Internal-vs-External training.
• Enable totally unsupervised DL when no training data are available.
• Enable supervised DL with modest amounts of training data.
• New applications, disciplines and domains, which are enabled by the unified approach.
• A platform for substantial progress in video analysis (which has been lagging behind so far due to the strong reliance on exhaustive supervised training data).
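To make the notion of internal patch recurrence concrete, the following minimal sketch (illustrative only; the toy image, patch size, and similarity criterion are assumptions, not the proposal's method) measures how well each small patch of a single image is approximated by other patches of the same image:

```python
import numpy as np

def internal_recurrence(image, patch=5, stride=3):
    """For each patch, return the distance to its best-matching *other* patch
    within the same image -- a crude measure of internal redundancy."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(image[i:i + patch, j:j + patch].ravel())
    P = np.stack(patches)                                  # (num_patches, patch*patch)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)    # all pairwise squared distances
    np.fill_diagonal(d2, np.inf)                           # exclude self-matches
    return np.sqrt(d2.min(axis=1))                         # nearest-neighbor distance per patch

# Toy image with strong repetitive structure (a sine grating plus mild noise).
x = np.linspace(0, 8 * np.pi, 48)
img = np.sin(x)[None, :] * np.ones((48, 1)) + 0.01 * np.random.randn(48, 48)
print("median nearest-patch distance:", np.median(internal_recurrence(img)))
```

Small nearest-patch distances indicate strong internal redundancy, which is the data-specific prior the summary above refers to.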
Max ERC Funding
2 466 940 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym DELPHI
Project Computing Answers to Complex Questions in Broad Domains
Researcher (PI) Jonathan Berant
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary The explosion of information around us has democratized knowledge and transformed its availability for
people around the world. Still, since information is mediated through automated systems, access is bounded
by their ability to understand language.
Consider an economist asking “What fraction of the top-5 growing countries last year raised their CO2 emission?”.
While the required information is available, answering such complex questions automatically is
not possible. Current question answering systems can answer simple questions in broad domains, or complex
questions in narrow domains. However, broad and complex questions are beyond the reach of the state of the art.
This is because systems are unable to decompose questions into their parts, and find the relevant information
in multiple sources. Further, as answering such questions is hard for people, collecting large datasets to train
such models is prohibitive.
In this proposal I ask: Can computers answer broad and complex questions that require reasoning over
multiple modalities? I argue that by synthesizing the advantages of symbolic and distributed representations
the answer will be “yes”. My thesis is that symbolic representations are suitable for meaning composition, as
they provide interpretability, coverage, and modularity. Complementarily, distributed representations (learned
by neural nets) excel at capturing the fuzziness of language. I propose a framework where complex questions
are symbolically decomposed into sub-questions, each of which is answered with a neural network, and the final answer
is computed from all gathered information.
This research tackles foundational questions in language understanding. What is the right representation
for reasoning in language? Can models learn to perform complex actions in the face of paucity of data?
Moreover, my research, if successful, will transform how we interact with machines, and define a role for
them as research assistants in science, education, and our daily life.
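The following toy sketch (purely illustrative; the decomposition rule and sub-question answerers are stand-ins, not the proposed neural models) shows the shape of the proposed pipeline: symbolically decompose a complex question into sub-questions, answer each one separately, and compute the final answer from the gathered results.

```python
from typing import Callable, Dict, List

def decompose(question: str) -> List[str]:
    # Stand-in for the symbolic decomposition step: a hand-written rule
    # for the economist's example question from the abstract.
    return [
        "Which were the top-5 growing countries last year?",
        "For each of those countries, did its CO2 emission rise?",
    ]

def neural_answerer(sub_question: str) -> object:
    # Stand-in for a learned model answering one sub-question
    # (here: canned answers for the illustrative example).
    canned: Dict[str, object] = {
        "Which were the top-5 growing countries last year?":
            ["A", "B", "C", "D", "E"],
        "For each of those countries, did its CO2 emission rise?":
            {"A": True, "B": False, "C": True, "D": False, "E": False},
    }
    return canned[sub_question]

def answer(question: str, answerer: Callable[[str], object]) -> float:
    subs = decompose(question)
    countries = answerer(subs[0])
    raised = answerer(subs[1])
    # Final symbolic computation over the gathered sub-answers.
    return sum(raised[c] for c in countries) / len(countries)

print(answer("What fraction of the top-5 growing countries last year "
             "raised their CO2 emission?", neural_answerer))   # -> 0.4
```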
Max ERC Funding
1 499 375 €
Duration
Start date: 2019-04-01, End date: 2024-03-31
Project acronym DIFFOP
Project Nonlinear Data and Signal Analysis with Diffusion Operators
Researcher (PI) Ronen TALMON
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Nowadays, extensive collection and storage of massive data sets have become routine in multiple disciplines and in everyday life. These large amounts of intricate data often make arithmetic on data samples and basic comparisons between them problematic, raising new challenges to traditional data analysis objectives such as filtering and prediction. Furthermore, the availability of such data constantly pushes the boundaries of data analysis to new emerging domains, ranging from neuronal and social network analysis to multimodal sensor fusion. The combination of evolved data and new domains drives a fundamental change in the field of data analysis. Indeed, many classical model-based techniques have become obsolete since their models do not embody the richness of the collected data. Today, one notable avenue of research is the development of nonlinear techniques that transition directly from data to representations, without deriving models in closed form. The vast majority of such existing data-driven methods operate directly on the data, a hard task by itself when the data are large and elaborate. The goal of this research is to develop a fundamentally new methodology for high-dimensional data analysis with diffusion operators, making use of recent transformative results in manifold and geometry learning. More concretely, shifting the focus from processing the data samples themselves to considering structured data through the lens of diffusion operators will introduce new powerful “handles” on the data, capturing their complexity efficiently. We will study the basic theory behind this nonlinear analysis, develop new operators for this purpose, and devise efficient data-driven algorithms. In addition, we will explore how our approach can be leveraged to devise efficient solutions to a broad range of open real-world data analysis problems, involving intrinsic representations, sensor fusion, time-series analysis, network connectivity inference, and domain adaptation.
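For background, the basic diffusion-operator construction alluded to above (the standard diffusion-maps recipe, recalled here for orientation rather than as the proposal's methodology) can be sketched as follows; the toy data and kernel bandwidth are illustrative assumptions.

```python
import numpy as np

def diffusion_operator(X, eps):
    """Row-stochastic diffusion operator P = D^{-1} W built from a Gaussian kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / eps)                                  # affinity kernel
    return W / W.sum(axis=1, keepdims=True)                # row-normalize

def diffusion_embedding(P, k=2, t=1):
    """Embed each sample by the leading non-trivial eigenvectors of P,
    scaled by the t-th power of the eigenvalues (the diffusion map)."""
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    return vecs[:, 1:k + 1] * (vals[1:k + 1] ** t)          # skip the trivial eigenvector

# Toy data: noisy samples from a one-dimensional curve embedded in the plane.
theta = np.linspace(0, np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(200, 2)
P = diffusion_operator(X, eps=0.05)
print(diffusion_embedding(P).shape)   # (200, 2)
```

The operator P, rather than the raw samples X, is the kind of structured "handle" on the data that the proposal plans to analyze and generalize.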
Max ERC Funding
1 260 000 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym EffectiveTG
Project Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics
Researcher (PI) Gal BINYAMINI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary Tame geometry studies structures in which every definable set has a
finite geometric complexity. The study of tame geometry spans several
interrelated mathematical fields, including semialgebraic,
subanalytic, and o-minimal geometry. The past decade has seen the
emergence of a spectacular link between tame geometry and arithmetic
following the discovery of the fundamental Pila-Wilkie counting
theorem and its applications in unlikely diophantine
intersections. The P-W theorem itself relies crucially on the
Yomdin-Gromov theorem, a classical result of tame geometry with
fundamental applications in smooth dynamics.
It is natural to ask whether the complexity of a tame set can be
estimated effectively in terms of the defining formulas. While a large
body of work is devoted to answering such questions in the
semialgebraic case, surprisingly little is known concerning more
general tame structures - specifically those needed in recent
applications to arithmetic. The nature of the link between tame
geometry and arithmetic is such that any progress toward effectivizing
the theory of tame structures will likely lead to effective results
in the domain of unlikely intersections. Similarly, a more effective
version of the Yomdin-Gromov theorem is known to imply important
consequences in smooth dynamics.
The proposed research will approach effectivity in tame geometry from
a fundamentally new direction, bringing to bear methods from the
theory of differential equations which have until recently never been
used in this context. Toward this end, our key goals will be to gain
insight into the differential algebraic and complex analytic structure
of tame sets; and to apply this insight in combination with results
from the theory of differential equations to effectivize key results
in tame geometry and its applications to arithmetic and dynamics. I
believe that my preliminary work in this direction amply demonstrates
the feasibility and potential of this approach.
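For the reader's orientation, the Pila-Wilkie theorem mentioned above can be stated as follows (standard formulation, recalled as background): if $X \subseteq \mathbb{R}^n$ is definable in an o-minimal structure and $X^{\mathrm{trans}}$ denotes its transcendental part (X with all connected positive-dimensional semialgebraic subsets removed), then for every $\varepsilon > 0$ there is a constant $c(X,\varepsilon)$ such that
\[
\#\bigl\{\, x \in X^{\mathrm{trans}} \cap \mathbb{Q}^n \;:\; H(x) \le T \,\bigr\} \;\le\; c(X,\varepsilon)\, T^{\varepsilon}
\qquad \text{for all } T \ge 1,
\]
where H denotes the standard multiplicative height. The effectivity questions raised above ask, in particular, how constants such as $c(X,\varepsilon)$ depend on the data defining X.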
Max ERC Funding
1 155 027 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym FTHPC
Project Fault Tolerant High Performance Computing
Researcher (PI) Oded Schwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures), resulting from the steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ascertaining fault tolerance for high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort, but severely degrading performance (e.g., checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general purpose fault resilience.
Efficient general purpose solutions (e.g., via error correcting codes) revolutionized memory and communication devices over two decades ago, enabling programmers to effectively disregard the otherwise very likely memory and communication errors. The time has come for a similar paradigm shift in the computing regimen. I argue that exciting recent advances in error correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
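As a small, self-contained illustration of algorithm-level fault resilience via error-detecting redundancy (the classical Huang-Abraham checksum scheme for matrix multiplication, given here as background; the proposal's own techniques may differ), the sketch below augments the operands with checksums so that a corrupted output entry is detected:

```python
import numpy as np

def checksummed_matmul(A, B):
    """Multiply A @ B while carrying row/column checksums (algorithm-based fault tolerance)."""
    Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])       # extra checksum row
    Bc = np.hstack([B, B.sum(axis=1, keepdims=True)])       # extra checksum column
    return Ac @ Bc                                           # result is (m+1) x (p+1)

def verify(Cc, tol=1e-8):
    """Check that the checksum row/column are consistent with the data block."""
    C = Cc[:-1, :-1]
    row_ok = np.allclose(Cc[-1, :-1], C.sum(axis=0), atol=tol)
    col_ok = np.allclose(Cc[:-1, -1], C.sum(axis=1), atol=tol)
    return row_ok and col_ok

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((3, 5))
Cc = checksummed_matmul(A, B)
print("fault-free result verified:", verify(Cc))   # True

Cc[2, 1] += 1.0                                     # simulate a silent, bit-flip style fault
print("corrupted result verified: ", verify(Cc))   # False -- the fault is detected
```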
Max ERC Funding
1 824 467 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym HARMONIC
Project Discrete harmonic analysis for computer science
Researcher (PI) Yuval FILMUS
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Boolean function analysis is a topic of research at the heart of theoretical computer science. It studies functions on n input bits (for example, functions computed by Boolean circuits) from a spectral perspective, by treating them as real-valued functions on the group Z_2^n, and using techniques from Fourier and functional analysis. Boolean function analysis has been applied to a wide variety of areas within theoretical computer science, including hardness of approximation, learning theory, coding theory, and quantum complexity theory.
Despite its immense usefulness, Boolean function analysis has limited scope, since it is only appropriate for studying functions on {0,1}^n (a domain known as the Boolean hypercube). Discrete harmonic analysis is the study of functions on domains possessing richer algebraic structure such as the symmetric group (the group of all permutations), using techniques from representation theory and Sperner theory. The considerable success of Boolean function analysis suggests that discrete harmonic analysis could likewise play a central role in theoretical computer science.
The goal of this proposal is to systematically develop discrete harmonic analysis on a broad variety of domains, with an eye toward applications in several areas of theoretical computer science. We will generalize classical results of Boolean function analysis beyond the Boolean hypercube, to domains such as finite groups, association schemes (a generalization of finite groups), the quantum analog of the Boolean hypercube, and high-dimensional expanders (high-dimensional analogs of expander graphs). Potential applications include a quantum PCP theorem and two outstanding open questions in hardness of approximation: the Unique Games Conjecture and the Sliding Scale Conjecture. Beyond these concrete applications, we expect that the fundamental results we prove will have many other applications that are hard to predict in advance.
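For background, the spectral viewpoint referred to above is the Fourier-Walsh expansion over the Boolean hypercube (a standard formula, included for orientation): every $f : \{0,1\}^n \to \mathbb{R}$ can be written uniquely as
\[
f(x) \;=\; \sum_{S \subseteq [n]} \hat{f}(S)\, \chi_S(x),
\qquad
\chi_S(x) = \prod_{i \in S} (-1)^{x_i},
\qquad
\hat{f}(S) = \mathbb{E}_{x \in \{0,1\}^n}\bigl[ f(x)\, \chi_S(x) \bigr],
\]
and the proposal seeks analogues of this decomposition, and of the theorems built on it, over richer domains such as the symmetric group, association schemes, and high-dimensional expanders.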
Max ERC Funding
1 473 750 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym HD-App
Project New horizons in homogeneous dynamics and its applications
Researcher (PI) Uri SHAPIRA
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE1, ERC-2017-STG
Summary We present a large variety of novel lines of research in Homogeneous Dynamics with emphasis on the dynamics of the diagonal group. Both new and classical applications are suggested, most notably to
• Number Theory
• Geometry of Numbers
• Diophantine approximation.
Emphasis is given to applications in
• Diophantine properties of algebraic numbers.
The proposal consists of four sections.
(1) In the first section we discuss questions pertaining to topological and distributional aspects of periodic orbits of the diagonal group in the space of lattices in Euclidean space. These objects encode deep information regarding Diophantine properties of algebraic numbers. We demonstrate how these questions are closely related to, and may help solve, some of the central open problems in the geometry of numbers and Diophantine approximation.
(2) In the second section we discuss Minkowski's conjecture regarding integral values of products of linear forms (its statement is recalled after this list). For over a century this central conjecture has resisted a general solution, and a novel and promising strategy for its resolution is presented.
(3) In the third section, a novel conjecture regarding the limiting distribution of infinite-volume orbits is presented, in analogy with existing results regarding finite-volume orbits. Then, a variety of applications and special cases are discussed, some of which give new results regarding classical concepts such as the continued fraction expansion of rational numbers.
(4) In the last section we suggest a novel strategy for attacking one of the most notorious open problems in Diophantine approximation, namely: do cubic numbers have unbounded continued fraction expansions? This novel strategy leads us to embark on a systematic study of an area in homogeneous dynamics that has not yet been studied, namely the dynamics in the space of discrete subgroups of rank k in R^n (identified up to scaling).
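For completeness, Minkowski's conjecture mentioned in item (2) can be stated as follows (classical formulation, recalled for orientation): for every lattice $\Lambda \subset \mathbb{R}^n$ of covolume 1 and every $x \in \mathbb{R}^n$ there exists a lattice point $v \in \Lambda$ with
\[
\prod_{i=1}^{n} \bigl| x_i - v_i \bigr| \;\le\; \frac{1}{2^{\,n}},
\]
with equality attained, e.g., for $\Lambda = \mathbb{Z}^n$ and $x = (1/2,\dots,1/2)$. The conjecture is known in low dimensions, while a general solution remains open.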
Max ERC Funding
1 432 730 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym HOLI
Project Deep Learning for Holistic Inference
Researcher (PI) Amir Globerson
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Machine learning has rapidly evolved in the last decade, significantly improving accuracy on tasks such as image classification. Much of this success can be attributed to the re-emergence of neural nets. However, learning algorithms are still far from achieving the capabilities of human cognition. In particular, humans can rapidly organize an input stream (e.g., textual or visual) into a set of entities, and understand the complex relations between those. In this project I aim to create a general methodology for semantic interpretation of input streams. Such problems fall under the structured-prediction framework, to which I have made numerous contributions. The proposal identifies and addresses three key components required for a comprehensive and empirically effective approach to the problem.
First, we consider the holistic nature of semantic interpretations, where a top-down process chooses a coherent interpretation among the vast number of options. We argue that deep-learning architectures are ideally suited for modeling such coherence scores, and propose to develop the corresponding theory and algorithms. Second, we address the complexity of the semantic representation, where a stream is mapped into a variable number of entities, each having multiple attributes and relations to other entities. We characterize the properties a model should satisfy in order to produce such interpretations, and propose novel models that achieve this. Third, we develop a theory for understanding when such models can be learned efficiently, and how well they can generalize. To achieve this, we address key questions of non-convex optimization, inductive bias and generalization. We expect these contributions to have a dramatic impact on AI systems, from machine reading of text to image analysis. More broadly, they will help bridge the gap between machine learning as an engineering field, and the study of human cognition.
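As a schematic illustration of the first component (a top-down coherence score that selects one interpretation among many candidates), the toy sketch below scores candidate sets of (entity, relation, entity) triples with a small stand-in network and keeps the highest-scoring one; the featurization and scorer are assumptions for illustration, not the proposed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(interpretation):
    """Stand-in featurization of a candidate interpretation
    (a list of (entity, relation, entity) triples)."""
    entities = {e for h, _, t in interpretation for e in (h, t)}
    relations = {r for _, r, _ in interpretation}
    return np.array([len(interpretation), len(entities), len(relations)], dtype=float)

# Stand-in "neural" coherence scorer: one hidden layer with fixed random weights.
W1, W2 = rng.standard_normal((3, 16)), rng.standard_normal(16)

def coherence_score(interpretation):
    h = np.tanh(features(interpretation) @ W1)
    return float(h @ W2)

candidates = [
    [("ball", "on", "table")],
    [("table", "on", "ball")],
    [("ball", "on", "table"), ("cup", "on", "table")],
]
best = max(candidates, key=coherence_score)   # top-down choice of one coherent reading
print(best)
```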
Max ERC Funding
1 932 500 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym HomDyn
Project Homogenous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems.
One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\mathbb{R}/\mathbb{Z}$.
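For orientation, a standard formulation of Furstenberg's conjecture (added here for the reader; not text from the proposal): every ergodic Borel probability measure on $\mathbb{R}/\mathbb{Z}$ that is invariant under both $x \mapsto 2x \bmod 1$ and $x \mapsto 3x \bmod 1$ is either the Lebesgue measure or supported on finitely many rational points.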
A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term.
One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics.
Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
Max ERC Funding
2 090 625 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym HOWPER
Project An open or closed process: Determining the global scheme of perception
Researcher (PI) Ehud AHISSAR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), LS5, ERC-2017-ADG
Summary Despite decades of intensive research, there is no agreement about the general scheme of perception: Is the external object a trigger for a brain-internal process (open-loop perception, OLP) or is the object included in brain dynamics during the entire perceptual process (closed-loop perception, CLP)? HOWPER is designed to provide a definite answer to this question in the cases of human touch and vision. What enables this critical test is our development of an explicit CLP hypothesis, which will be contrasted, via specific testable predictions, with the OLP scheme. In the event that CLP is validated, HOWPER will introduce a radical paradigm shift in the study of perception, since almost all current experiments are guided, implicitly or explicitly, by the OLP scheme. If OLP is confirmed, HOWPER will provide the first formal affirmation for its superiority over CLP.
Our approach in this novel paradigm is based on a triangle of interactive efforts comprising theory, analytical experiments, and synthetic experiments. The theoretical effort (WP1) will be based on the core theoretical framework already developed in our lab. The analytical experiments (WP2) will involve human perceivers. The synthetic experiments (WP3) will be performed on synthesized artificial perceivers. The fourth WP will exploit our novel rat-machine hybrid model for testing the neural applicability of the insights gained in the other WPs, whereas the fifth WP will translate our insights into novel visual-to-tactile sensory substitution algorithms.
HOWPER is expected to either revolutionize or significantly advance the field of human perception, to greatly improve visual-to-tactile sensory substitution approaches and to contribute novel biomimetic algorithms for autonomous robotic agents.
Max ERC Funding
2 493 441 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym iEXTRACT
Project Information Extraction for Everyone
Researcher (PI) Yoav Goldberg
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Staggering amounts of information are stored in natural language documents, rendering them unavailable to data-science techniques. Information Extraction (IE), a subfield of Natural Language Processing (NLP), aims to automate the extraction of structured information from text, yielding datasets that can be queried, analyzed and combined to provide new insights and drive research forward.
Despite tremendous progress in NLP, IE systems remain mostly inaccessible to non-NLP-experts who can greatly benefit from them. This stems from the current methods for creating IE systems: the dominant machine-learning (ML) approach requires technical expertise and large amounts of annotated data, and does not provide the user control over the extraction process. The previously dominant rule-based approach unrealistically requires the user to anticipate and deal with the nuances of natural language.
I aim to remedy this situation by revisiting rule-based IE in light of advances in NLP and ML. The key idea is to cast IE as a collaborative human-computer effort, in which the user provides domain-specific knowledge, and the system is in charge of solving various domain-independent linguistic complexities, ultimately allowing the user to query unstructured texts easily, via structured forms.
More specifically, I aim to develop:
(a) a novel structured representation that abstracts much of the complexity of natural language;
(b) algorithms that derive these representations from texts;
(c) an accessible rule language to query this representation;
(d) AI components that infer the user's extraction intents and, based on them, promote relevant examples and highlight extraction cases that require special attention.
The ultimate goal of this project is to democratize NLP and bring advanced IE capabilities directly to the hands of domain experts: doctors, lawyers, researchers and scientists, empowering them to process large volumes of data and advance their professions.
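As a purely hypothetical illustration of goal (c), added for orientation and not taken from the proposal (the representation, rule syntax and function names below are invented), a rule over a pre-extracted structured representation could be as simple as a pattern with wildcards:

    # Hypothetical sketch only: a toy "rule language" over a pre-extracted
    # structured representation. Names and representation are invented for
    # illustration; they are not part of the iEXTRACT design.
    from typing import List, Tuple

    Triple = Tuple[str, str, str]  # (subject, relation, object)

    def match(triples: List[Triple], pattern: Triple) -> List[Triple]:
        """Return the triples matching a pattern; '?' is a wildcard slot."""
        return [t for t in triples
                if all(p == "?" or v == p for v, p in zip(t, pattern))]

    # A toy representation that might be derived from two sentences.
    doc = [("aspirin", "treats", "headache"),
           ("ibuprofen", "treats", "inflammation"),
           ("aspirin", "interacts_with", "warfarin")]

    # "What does aspirin treat?" expressed declaratively, with no ML expertise.
    print(match(doc, ("aspirin", "treats", "?")))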
Max ERC Funding
1 499 354 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym iMove
Project Translating rewards to eye movements
Researcher (PI) Matityahu JOSHUA
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), LS5, ERC-2017-STG
Summary The drive for rewards controls almost every aspect of our behavior, from stereotypic reflexive behaviors to complex voluntary action. It is therefore not surprising that the symptoms of neurological disorders that interrupt reward processing, such as those stemming from drug abuse and depression, include deficits in the capacity to make even simple movements. Accordingly, how do rewards drive and shape movements? The brain uses two major subcortical networks to drive behavior: the basal ganglia and the cerebellum. Both areas are essential for the control of movement, as damage to either structure leads to severe motor disabilities. Research on the basal ganglia has highlighted their importance in the control of reward-driven behavior, but how reward information interacts with sensorimotor signals to drive the motor periphery is unknown. By contrast, research on the cerebellum has focused primarily on how sensory error signals are used to optimize motor commands but has mostly ignored the modulatory factors that influence behavior, such as reward. My goal is to unify research on the basal ganglia and cerebellum in order to understand how the computations underlying the influence of reward on action are implemented in the brain. I hypothesize that rewards drive and shape the motor commands in both subcortical networks, albeit with differing behavioral functions. While in the basal ganglia information about reward is used to mediate selection between multiple actions, I predict that, in the cerebellum, reward potentiates movements to drive more accurate behavior. I will use the monkey smooth pursuit eye movement system as a powerful model motor system to study the neural mechanisms by which reward influences motor processing. I will combine the use of novel behavioral paradigms with novel applications of neural recording and optogenetic stimulation in primates to probe the activity of neurons in the cerebral cortex, basal ganglia, and cerebellum.
Max ERC Funding
1 570 000 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym LiftMatch
Project Lifting Methods for Global Matching Problems
Researcher (PI) Yaron LIPMAN
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary "This proposal presents a research program aimed at breaking new ground in theoretical, algorithmic and practical aspects of the notoriously difficult matching problems. The main goal is developing a unified algorithmic framework for matching problems that enjoys theoretical guarantees and high practical value for ``real-life'' applications.
The core methodological aspect is the modelling of matching problems as high-dimensional, lifted convex problems that can be efficiently approximated. We present several case-studies of this methodology, constructing efficient algorithms for approximating the solutions of important instances of the matching problem. The results already demonstrate state of the art performance and are backed-up with novel theoretical analysis proving tightness and correctness of the suggested convexifications for certain classes of non-trivial input.
This proposal has ``high risk - high impact'' profile: The ``high-risk"" aspect of this proposal comes from the hardness of general matching problems which, when faithfully represented, are NP-hard. However, as we demonstrate, combining convex optimization tools in a careful way, that takes into account computational complexity, is likely to push forward the limits of current algorithmic solutions. The ``High-impact"" will be immediate: As matching problems exist in almost every aspect of scientific research, practical generic algorithms for matching problems are likely to have a significant influence.
We believe that the inroads and preliminary results presented in this proposal already provide strong evidence for the potential success of the suggested research program."
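For orientation, a textbook instance of the kind of lifting and convex relaxation referred to above (a generic example, not the proposal's specific construction): matching two graphs with adjacency (or affinity) matrices $A$ and $B$ can be written as $\min_{P \in \Pi_n} \|AP - PB\|_F^2$ over the set $\Pi_n$ of $n \times n$ permutation matrices, and relaxed by replacing $\Pi_n$ with its convex hull, the Birkhoff polytope of doubly stochastic matrices $\{P \ge 0 : P\mathbf{1} = \mathbf{1},\ P^{\top}\mathbf{1} = \mathbf{1}\}$, yielding a convex program whose solution is then rounded back to a permutation.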
Max ERC Funding
1 675 640 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym LightCrypt
Project New Directions in Lightweight Cryptanalysis
Researcher (PI) Nathan Keller
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Over the next five years, fifty billion new smart devices will be connected to the Internet of Things (IoT), creating a revolution in the way we interact with our environment. Such resource-constrained devices will require lightweight cryptography to protect them and us from bad actors. Unfortunately, such schemes can be highly vulnerable: two notable examples are the encryption schemes used in GSM cellular phones and in car remote controls - both broken by the PI. We claim that it is not sufficient to adjust the current design and analysis tools to the constrained environment. Instead, we must establish a new research methodology, aiming directly at the problems arising in the 'lightweight realm'.
We plan to concentrate on four main directions. First, we will go 'a level up' to study the security of generic lightweight building blocks in order to find the minimal number of operations required to transition from insecure to secure designs. Second, when considering specific ciphers we will pursue practical low complexity attacks, which are more relevant to the lightweight realm than standard theoretical attacks. Third, we will pursue new directions toward establishing 'white-box cryptography' – a central challenge in IoT cryptography. Finally, we will explore further applications of discrete analysis to lightweight cryptography, trying to establish rigorous conditions under which the standard cryptanalytic techniques apply in order to avoid unnecessarily pessimistic security estimates.
For the near future, we hope that our research will make it possible to detect and fix weaknesses in existing lightweight ciphers before they can be exploited by the 'bad guys'. Looking forward farther, we hope to understand how to design new secure lightweight ciphers for the billions of IoT devices to come.
Max ERC Funding
1 487 500 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym NEIMO
Project Neuronal regulation of immunity
Researcher (PI) Asia Rolls
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), LS5, ERC-2017-STG
Summary The philosophical ‘mind-body problem’ examines the relationship between the mind, a mental process, and its impact on the body, a physical entity. It has multiple manifestations in modern medicine, for example, the emergence of disease following stress, or recovery in response to placebo treatment. Nevertheless, this fundamental aspect of physiology remains largely unexplored. Here we propose to approach this question by focusing on a specific aspect of brain-body interactions: namely, how the brain's interoceptive site, the insular cortex, affects the immune system, the body's main protective mechanism. We will explore the interactions between the insula and the immune system on two levels. First, we will focus on the insula and its projections within the brain; secondly, we will study the role of projections from the brain, specifically, the peripheral nervous system (PNS), which innervates all the immunological organs. We will use chemogenetics to activate/inhibit the activity of neurons in the insula, and optogenetics to locally control, using light, the activity of sympathetic and parasympathetic neurons innervating immunological sites. We will then characterize the effects of these neuronal manipulations on the immune system to decipher fundamental principles of the dialog between nerve and immune cells, with the long-term goal of harnessing the brain's therapeutic potential. My multidisciplinary background places me in a unique position to investigate brain regulation of immunity, and our preliminary data support our central hypothesis that the brain and, specifically, the insula, regulate immune responses. Taken together, this study promises to alter our understanding of brain-body interactions, and to set the stage for novel approaches to analyze how the state of the mind impacts physiology.
Max ERC Funding
1 500 000 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym NovelExperiSENSE
Project How experience shapes brain specializations
Researcher (PI) Amir AMEDI
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), LS5, ERC-2017-COG
Summary "The main goal of my research is to understand the origins of brain specialization and its flexibility across the lifespan. In our modern society Novel Sensory Experiences (NSE) and augmenting technologies are increasingly part of our everyday life (e.g. www.pokemongo.com). How does our brain deal with processing these new experiences? How much of its functional specializations and organization principle (e.g. topography in early sensory and motor brain areas) are already predefined by evolution and are locked after critical periods (i.e. early in life)? Our main hypothesis is that computational tasks, cognitive goals and partially innate network connectivity patterns, rather than sensory input per se, drive the emergence of brain specializations even after the critical periods pass. We base this also on our extensive experience with teaching blind to ""see"" with their ears using sensory substitution devices (SSDs) and the resulting brain specializations (including putative mechanisms). Here, we will extend, generalize and consolidate this theory by tracking in healthy adults the development of NSE (Novel since never experienced it before during life nor evolution). We suggest here to build 5 novel topographic devices (e.g. we developed recently the IRThermoSense for perceiving heat information and beyond walls thermal images without interfering with regular vision). We will also work with providing novel experiences to congenitally sensory deprived populations (e.g. deaf) to promote sensory restoration. In both cases, we aim to characterize the integration of NSEs in the brain using longitudinal self and supervised learning in virtual and real world environments and cutting-edge biologically inspired computational neuroimaging tools, with special emphasis on topography (similar to how natural senses are represented e.g. body homunculus and retinotopy), task-selective specializations, multisensory and sensorimotor binding, and the emergence of distal attribution.
"
Max ERC Funding
2 000 000 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym PATHWISE
Project Pathwise methods and stochastic calculus in the path towards understanding high-dimensional phenomena
Researcher (PI) Ronen ELDAN
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary Concepts from the theory of high-dimensional phenomena play a role in several areas of mathematics, statistics and computer science. Many results in this theory rely on tools and ideas originating in adjacent fields, such as transportation of measure, semigroup theory and potential theory. In recent years, a new symbiosis with the theory of stochastic calculus is emerging.
In a few recent works, by developing a novel approach of pathwise analysis, my coauthors and I managed to make progress in several central high-dimensional problems. This emerging method relies on the introduction of a stochastic process which allows one to associate quantities and properties related to the high-dimensional object of interest to corresponding notions in stochastic calculus, thus making the former tractable through the analysis of the latter.
We propose to extend this approach towards several long-standing open problems in high dimensional probability and geometry. First, we aim to explore the role of convexity in concentration inequalities, focusing on three central conjectures regarding the distribution of mass on high dimensional convex bodies: the Kannan-Lovász-Simonovits (KLS) conjecture, the variance conjecture and the hyperplane conjecture, as well as emerging connections with quantitative central limit theorems, entropic jumps and stability bounds for the Brunn-Minkowski inequality. Second, we are interested in dimension-free inequalities in Gaussian space and on the Boolean hypercube: isoperimetric and noise-stability inequalities and robustness thereof, transportation-entropy and concentration inequalities, regularization properties of the heat-kernel and L_1 versions of hypercontractivity. Finally, we are interested in developing new methods for the analysis of Gibbs distributions with a mean-field behavior, related to the new theory of nonlinear large deviations, and towards questions regarding interacting particle systems and the analysis of large networks.
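For orientation, a common formulation of the KLS conjecture mentioned above (standard in the literature, not quoted from the proposal): there is a universal constant $c > 0$ such that every log-concave probability measure $\mu$ on $\mathbb{R}^n$ satisfies $\mu^{+}(\partial A) \ge c\,\|\mathrm{Cov}(\mu)\|_{\mathrm{op}}^{-1/2}\,\min\{\mu(A),\,1-\mu(A)\}$ for every Borel set $A$; that is, up to a universal factor, the extremal sets in the isoperimetric problem for $\mu$ are half-spaces.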
Max ERC Funding
1 308 188 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym PCPABF
Project Challenging Computational Infeasibility: PCP and Boolean functions
Researcher (PI) Shmuel Avraham Safra
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE6, ERC-2018-ADG
Summary Computer Science, and in particular the Analysis of Algorithms and Computational Complexity theory, classifies algorithmic problems into feasible ones and those that cannot be efficiently solved. Many fundamental problems were shown to be NP-hard and are therefore, unless P=NP, infeasible.
Consequently, research efforts shifted towards approximation algorithms, which find close-to-optimal solutions for NP-hard optimization problems.
The PCP Theorem and its application to infeasibility of approximation establish that, unless P=NP, there are no efficient approximation algorithms for numerous classical problems; this research won the authors (the PI included) the 2001 Gödel Prize.
To show infeasibility of approximation of some fundamental problems, however, a stronger PCP was postulated in 2002, namely, Khot's Unique-Games Conjecture.
It has transformed our understanding of optimization problems, provoked new tools intended to refute it, and motivated new sophisticated techniques aimed at proving it.
Recently Khot, Minzer (a student of the PI) and the PI proved a related conjecture: the 2-to-2-Games conjecture (our paper won the Best Paper award at FOCS'18). In light of that progress, recognized by the community as half the distance towards the Unique-Games conjecture, resolving the Unique-Games conjecture seems much more likely.
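For orientation, the Unique-Games Conjecture and the 2-to-2-Games theorem mentioned here can be stated in their standard forms (this phrasing is not quoted from the proposal): the conjecture asserts that for every $\varepsilon, \delta > 0$ there is an alphabet size $k$ such that it is NP-hard to distinguish Unique-Games instances over alphabet $[k]$ in which at least a $1-\varepsilon$ fraction of the constraints is satisfiable from instances in which at most a $\delta$ fraction is; the 2-to-2-Games theorem proves the analogous statement for 2-to-2 constraints (with imperfect completeness), which is why it is regarded as covering half the distance to the conjecture.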
A field that plays a crucial role in this progress is Analysis of Boolean-functions.
For the recent breakthrough we had to dive deep into expansion properties of the Grassmann-graph.
The insight was subsequently applied to achieve much awaited progress on fundamental properties of the Johnson-graph.
With the emergence of cloud-computing, cryptocurrency, public-ledger and Blockchain technologies, the PCP methodology has found new and exciting applications.
This framework governs SNARKs, a new and emerging technology, and the ZCASH technology on top of Blockchain.
This is a thriving research area, but also an extremely vibrant High-Tech sector.
Max ERC Funding
2 059 375 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym PCPHDX
Project Probabilistically Checkable Proofs, Agreement Tests, and High Dimensional Expanders
Researcher (PI) Irit DVEER DINUR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary PCPs capture a striking local to global phenomenon in which a global object such as an NP witness can be checked using local constraints, and its correctness is guaranteed even if only a fraction of the constraints are satisfied.
PCPs are tightly related to hardness of approximation. The relation is essentially due to the fact that exact optimization problems can be reduced to their approximation counterparts through this local to global connection.
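For orientation, the PCP Theorem behind this connection can be stated in its standard form (not quoted from the proposal) as $\mathrm{NP} = \mathrm{PCP}[O(\log n), O(1)]$: every NP statement has a polynomial-size proof that a randomized verifier can check by tossing $O(\log n)$ coins and reading only $O(1)$ bits of the proof, accepting valid proofs of true statements and rejecting every purported proof of a false statement with constant probability.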
We view this local-to-global connection as a type of high dimensional expansion, akin to relatively new notions of high dimensional expansion (such as coboundary and cosystolic expansion) that have been introduced in the literature recently. We propose to study PCPs and high dimensional expansion together. We describe a concrete notion of “agreement expansion” and propose a systematic study of this question. We show how progress on agreement expansion questions is directly related to some of the most important open questions in PCPs such as the unique games conjecture, and the problem of constructing linear size PCPs.
We also propose to study the phenomenon of high dimensional expansion more broadly and to investigate its relation and applicability to questions in computational complexity that go beyond PCPs, in particular for hardness amplification and for derandomizing direct product constructions.
Max ERC Funding
1 512 035 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym PolSymAGA
Project Polarity and Central-Symmetry in Asymptotic Geometric Analysis
Researcher (PI) Shiri ARTSTEIN
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary Asymptotic Geometric Analysis is a relatively new field, the young finite dimensional cousin of Banach Space theory, functional analysis and classical convexity. It concerns the geometric study of high, but finite, dimensional objects, where the disorder of many parameters and many dimensions is "regularized" by convexity assumptions.
The proposed research is composed of several connected innovative studies in the frontier of Asymptotic Geometric Analysis, pertaining to the deeper understanding of two fundamental notions: Polarity and Central-Symmetry.
While the main drive comes from Asymptotic Convex Geometry, the applications extend throughout many mathematical fields from analysis, probability and symplectic geometry to combinatorics and computer science. The project will concern: the polarity map for functions, functional covering numbers, measures of symmetry, Godbersen's conjecture, Mahler's conjecture, Minkowski billiard dynamics and caustics.
My research objectives are twofold. First, to progress towards a solution of the open research questions described in the proposal, which I consider to be pivotal in the field, including Mahler's conjecture, Viterbo's conjecture and Godbersen's conjecture. Some of these questions have already been studied intensively, and their solutions are yet to be found; progress toward solving them would be of high significance. Secondly, as the studies in this proposal lie at the meeting point of several mathematical fields, and use Asymptotic Geometric Analysis in order to address major questions in other fields, such as Symplectic Geometry and Optimal transport theory, my second goal is to deepen these connections, creating a powerful framework that will lead to a deeper understanding, and to the formulation and resolution, of interesting questions currently unattainable.
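For orientation, Mahler's conjecture mentioned above admits a one-line statement (standard formulation, not quoted from the proposal): every centrally symmetric convex body $K \subset \mathbb{R}^n$ should satisfy $\mathrm{vol}(K) \cdot \mathrm{vol}(K^{\circ}) \ge 4^n/n!$, where $K^{\circ}$ is the polar body; the cube and its polar, the cross-polytope, are the conjectured minimizers, while the Blaschke-Santaló inequality gives the matching upper bound, attained by ellipsoids.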
Max ERC Funding
1 514 125 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym PrefrontalMap
Project Organization and learning-associated dynamics of prefrontal synaptic connectivity
Researcher (PI) Ofer YIZHAR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), LS5, ERC-2018-COG
Summary How does experience alter the functional architecture of synaptic connections in neural circuits? This question is particularly pertinent for the complex circuits of the medial prefrontal cortex (mPFC), a high-order associative neocortical area that plays a crucial role in flexible, goal-directed behavior. The mPFC is densely interconnected with cortical and subcortical circuits, and its neurons were shown to undergo substantial experience-dependent structural remodeling that is thought to support learning and memory consolidation. However, little is known regarding the synaptic organization of this complex circuit, and of the functional implications of its experience-dependent structural remodeling. In this proposal, we aim to uncover the organization and learning-associated dynamics of functional connectivity in the mouse mPFC.
To obtain high-resolution maps of cell type-specific synaptic connectivity in the mPFC, we will combine single-cell optogenetic manipulation with calcium imaging and electrophysiology in vitro, and establish the circuit-wide organization of connectivity within and between defined projecting neuron populations. We will test the hypothesis that pyramidal neurons projecting to subcortical targets form tightly interconnected subnetworks, and that inhibitory inputs to these networks, through selective innervation, can modulate information output from the mPFC.
To understand how learning changes the functional synaptic organization of the mPFC, we will establish an all-optical system for interrogation of synaptic connectivity in vivo. We will utilize this powerful platform to test the hypothesis that prefrontal-dependent learning is associated with reorganization of local-circuit functional connectivity among identified subcortically-projecting cell assemblies.
Our innovative technology will be widely applicable for neural circuit analysis in a variety of systems, and allow us to gain new insights into the complex circuitry of the mPFC.
Max ERC Funding
1 880 003 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym ProDIS
Project Provenance for Data-Intensive Systems
Researcher (PI) Daniel Deutch
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary In the context of data-intensive systems, data provenance captures the way in which data is used, combined and manipulated by the system. Provenance information can for instance be used to reveal whether data was illegitimately used, to reason about hypothetical data modifications, to assess the trustworthiness of a computation result, or to explain the rationale underlying the computation.
As data-intensive systems constantly grow in use, in complexity and in the size of data they manipulate, provenance tracking becomes of paramount importance. In its absence, it is next to impossible to follow the flow of data through the system. This in turn is extremely harmful for the quality of results, for enforcing policies, and for the public trust in the systems.
Despite important advancements in research on data provenance, and its possible revolutionary impact, it is unfortunately uncommon for practical data-intensive systems to support provenance tracking. The goal of the proposed research is to develop models, algorithms and tools that facilitate provenance tracking for a wide range of data-intensive systems, that can be applied to large-scale data analytics, allowing one to explain and reason about the computation that took place.
Towards this goal, we will address the following main objectives: (1) supporting provenance for modern data analytics frameworks such as data exploration and data science, (2) overcoming the computational overhead incurred by provenance tracking, (3) the development of user-friendly, provenance-based analysis tools and (4) experimental validation based on the development of prototype tools and benchmarks.
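As a generic illustration of what provenance tracking means in practice (this sketch is not ProDIS; the relations, identifiers and set-based annotation scheme are invented for exposition), each output tuple of a query can carry the identifiers of the input tuples it was derived from:

    # Generic illustration of "how-provenance": every result of a join carries
    # the identifiers of the input tuples that produced it. Not ProDIS code.
    from itertools import product

    orders = [("o1", "alice", "book"), ("o2", "bob", "pen")]
    customers = [("c1", "alice", "IL"), ("c2", "bob", "FR")]

    result = []
    for (oid, name_o, item), (cid, name_c, country) in product(orders, customers):
        if name_o == name_c:  # join condition
            result.append(((name_o, item, country), {oid, cid}))

    for row, provenance in result:
        print(row, "derived from", sorted(provenance))
    # ('alice', 'book', 'IL') derived from ['c1', 'o1']
    # ('bob', 'pen', 'FR') derived from ['c2', 'o2']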
Max ERC Funding
1 306 250 €
Duration
Start date: 2018-12-01, End date: 2023-11-30
Project acronym REACT
Project Realizable Advanced Cryptography
Researcher (PI) Zvika BRAKERSKI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary In a free society, there is persistent tension between utility and privacy. Citizens have the basic right to keep their personal information private. However, sometimes keeping our data private could significantly reduce our ability to use this data to benefit ourselves or society. This tension is multiplied many times over in our modern data driven society, where data is utilized using remote algorithms.
State of the art research suggests that new advanced cryptographic primitives can mitigate this tension. These include computing on encrypted data via fully homomorphic encryption, fine grained access control to encrypted data via attribute based encryption, and most recently general purpose program obfuscation, which on paper can solve many of cryptography's long standing problems. However, these primitives are largely either too complicated or not sufficiently founded to be considered for real world applications.
Project REACT will apply foundational theoretical study towards removing the barriers between advanced cryptographic primitives and reality. My viewpoint, supported by my prior research success, is that orders-of-magnitude improvement in efficiency and security requires foundational theoretical study, rather than focusing on optimizations or heuristics. My projection is that progress in this direction will both allow for future realistic implementation of these primitives, reducing said tension, as well as contribute to basic cryptographic study by opening new avenues for future research.
To achieve this goal, I will pursue the following objectives: (i) Studying the computational complexity of underlying hardness assumptions, specifically lattice based, to better understand the level of security we can expect of proposed primitives. (ii) Simplifying and extending the LWE/trapdoor paradigm that underlies many of the new primitives, and that I find incomplete. (iii) Constructing cryptographic graded encoding schemes and obfuscators.
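For orientation, the LWE problem underlying the paradigm in objective (ii) is standardly defined as follows (not quoted from the proposal): for a dimension $n$, modulus $q$ and small-error distribution $\chi$, the distribution of $(A, As + e \bmod q)$, with $A$ uniform in $\mathbb{Z}_q^{m \times n}$, $s$ uniform in $\mathbb{Z}_q^{n}$ and $e \leftarrow \chi^{m}$, is assumed to be computationally indistinguishable from $(A, u)$ with $u$ uniform in $\mathbb{Z}_q^{m}$.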
Max ERC Funding
1 493 803 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym RECENT-TO-REMOTE
Project Remote Memory Consolidation Based on Activity, Connectivity and Stability; Contribution of Neurons and Astrocytes.
Researcher (PI) Inbal GOSHEN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), LS5, ERC-2018-STG
Summary Our remote memories, weeks to decades long, define who we are and how we experience the world, yet almost nothing is known about the neuronal ensembles encoding them, or the mechanisms underlying the transition from recent to remote memory.
I propose a novel hypothesis explaining the selection of the ensembles supporting remote memories based on their activity, connectivity and stability. I further suggest that 'systems consolidation', underlying the transition from recent to remote memory, is implemented by ongoing interactions between brain regions. Finally, I propose a novel role for astrocytes in recent and remote memory.
My Specific Objectives are to: 1) Provide multi-dimensional characterization of the neuronal ensembles supporting recent and remote memory, by using activity-based tagging to show how recent and remote recall ensembles differ in activity, connectivity and stability. 2) Perturb the functional connectivity underlying 'systems consolidation' by employing connectivity-based tagging to label specific hippocampal and cortical projection neurons, image their activity during recent and remote memory, and causally demonstrate their functional significance to systems consolidation. 3) Determine the role of astrocytes in recent and remote memory consolidation and retrieval. We will manipulate astrocytes to show their role in recent and remote memory, ensemble allocation, and long-distance communication between neuronal populations. We will image astrocytic activity during a memory task to test if they can independently encode memory features, and how their activity corresponds to that of the neurons around them.
This pioneering ERC project, comprising innovative and ambitious experiments that go far beyond the state of the art in the field, will drive considerable progress in our understanding of the transition from recent to remote memory, identifying ensemble dynamics and critical projections and how they are modulated by astrocytes.
Max ERC Funding
1 637 500 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym RetinalRepurposing
Project Deciphering the computations underlying visual processing: Repurposing of retinal cells and how they are decoded by the visual thalamus
Researcher (PI) Michal RIVLIN
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), LS5, ERC-2017-STG
Summary Visual processing begins in the retina, with ~30 types of retinal ganglion cells (RGCs), each encoding a specific visual modality, such as edges or motion. A major challenge for deciphering the visual code is mapping the connections between each RGC and its target neurons, such as those in the lateral geniculate nucleus (LGN) and, via the LGN, the visual cortex. Recently, this challenge became even greater, as we and others revealed that the modality encoded by RGCs (traditionally considered a fixed, hardwired property of each RGC type) can be altered. Direction-selective RGCs reorient their directional tuning following visual adaptation, and other RGCs change their polarity preference (On/Off) as light levels change. These newly discovered dramatic changes in the core computations of RGCs depart from known retinal adaptation, which results in gain adjustments but no modality changes. We term them repurposing.
The discovery of repurposing contrasts with the widely held notion that the retina provides a stable representation of the visual scene for downstream processing. This newly exposed level of complexity in the retinal code raises a critical question for our understanding of vision: how do retinal targets interpret the dynamic retinal code, and how, despite such dynamics, does a consistent representation of the visual scene emerge?
We will use state-of-the-art electrophysiology and imaging techniques, and pioneer an approach for simultaneous retinal imaging and LGN recordings, to reveal how the mouse early visual system processes changing visual information. We will elucidate the subtypes of repurposed RGCs, the triggers for repurposing and their mechanisms. We will resolve, for the first time, the precise functional connectivity between subtypes of RGCs and LGN neurons, and determine how the LGN decodes retinal repurposing. Our groundbreaking research will pave the way for understanding how visual processing in a constantly changing world gives rise to a consistent perception.
Max ERC Funding
1 494 000 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym RMAST
Project Random Models in Arithmetic and Spectral Theory
Researcher (PI) Zeev Rudnick
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE1, ERC-2017-ADG
Summary The proposal studies deterministic problems in the spectral theory of the Laplacian and in analytic number theory by using random models. I propose two projects in spectral theory on this theme, both with a strong arithmetic ingredient. The first concerns minimal gaps between the eigenvalues of the Laplacian, where I seek a fit with the corresponding quantities for the various conjectured universality classes (Poisson/GUE/GOE). The second concerns curvature measures of nodal lines of eigenfunctions of the Laplacian, where I seek to determine the size of the curvature measures in the large-eigenvalue limit. The third project originates in analytic number theory and concerns the angular distribution of prime ideals in number fields, function-field analogues, and connections with Random Matrix Theory; here I raise new conjectures and problems on a very classical subject and aim to resolve them, at least in the function-field setting.
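For concreteness, the minimal-gap statistic in the first project can be stated as follows; this is a standard formulation added only for orientation, with notation chosen here rather than taken from the proposal.

% Minimal gap among the first N eigenvalues \lambda_1 \le \lambda_2 \le \dots of the Laplacian:
\[
  \delta_{\min}(N) \;=\; \min_{1 \le i < j \le N} \bigl| \lambda_i - \lambda_j \bigr| ,
\]
% and the question is how fast \delta_{\min}(N) shrinks as N \to \infty, compared with the
% predictions of the conjectured universality classes (Poisson / GUE / GOE).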
Max ERC Funding
1 840 625 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym SensStabComp
Project Sensitivity, Stability, and Computation
Researcher (PI) Gil KALAI
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and have been extensively studied over the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas.
The first goal of our proposed project is to push the mathematical theory of noise stability and noise sensitivity forward for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is the application of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods. Our third goal is to find applications to central long-standing problems in combinatorics, probability and the theory of computing.
The fourth goal of our project is to further develop the "argument against quantum computers", which is based on the insight that noisy intermediate-scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it, by studying connections of the theory with various areas of theoretical computer science and with the theory of social choice.
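To fix the central notion: the noise stability of a Boolean function f : {-1,1}^n -> {-1,1} at correlation rho = 1 - 2*eps is Stab_rho(f) = E[f(x) f(y)], where y is obtained from a uniform x by flipping each coordinate independently with probability eps. The following Python sketch, included only as an illustration and not part of the proposal, estimates this quantity for the majority function by Monte Carlo sampling.

# Monte Carlo estimate of the noise stability of n-bit majority:
# Stab_rho(f) = E[f(x) f(y)], where y flips each coordinate of a uniform x
# independently with probability eps, so rho = 1 - 2*eps.
import random

def majority(x):
    return 1 if sum(x) > 0 else -1   # n is odd, so sum(x) != 0

def noise_stability(f, n, eps, trials=20000):
    acc = 0
    for _ in range(trials):
        x = [random.choice((-1, 1)) for _ in range(n)]
        y = [-xi if random.random() < eps else xi for xi in x]
        acc += f(x) * f(y)
    return acc / trials

# Approaches (2/pi)*arcsin(1 - 2*eps) ~ 0.59 for eps = 0.1 as n grows.
print(noise_stability(majority, n=101, eps=0.1))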
Max ERC Funding
1 754 500 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym SVIS
Project Supervised Verification of Infinite-State Systems
Researcher (PI) Sharon Shoham Buchbinder
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Modern society relies more and more on computing for managing complex safety-critical tasks (e.g., in medicine, avionics, and the economy). Correctness of computerized systems is therefore crucial, as incorrect behaviors might lead to disastrous outcomes. Still, correctness is a big open problem without satisfactory theoretical or practical solutions, and this has dramatic effects on the security of our lives. The current practice in industry mainly employs testing, which detects bugs but cannot ensure their absence. In contrast, formal verification can provide formal correctness guarantees. Unfortunately, existing formal methods are still limited in their applicability to real-world systems, since most of them are either too automatic, hindered by the undecidability of verification, or too manual, relying on substantial human effort. This proposal will introduce a new methodology of supervised verification, based on which we will develop novel approaches that can be used to formally verify certain realistic systems. This will be achieved by dividing the verification process into tasks that are well suited for automation and tasks that are best done by a human supervisor, and by finding a suitable mode of interaction between the human and the machine. Supervised verification represents a conceptual leap, as it is centered on automatic procedures but complements them with human guidance; it therefore breaks the classical pattern of automated verification and creates new opportunities, both practical and theoretical. In addition to opening the way for developing tractable verification algorithms, it can be used to prove lower and upper bounds on the asymptotic complexity of verification by explicitly distilling the human's role. The approaches developed by this research will significantly improve system correctness and agility. At the same time, they will reduce the cost of testing, increase developers' productivity, and improve the overall understanding of computerized systems.
Max ERC Funding
1 499 528 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym THUNDEEP
Project A Theory for Understanding, Designing, and Training Deep Learning Systems
Researcher (PI) Ohad SHAMIR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary The rise of deep learning, in the form of artificial neural networks, has been the most dramatic and important development in machine learning over the past decade. Much more than a merely academic topic, deep learning is currently being widely adopted in industry, placed inside commercial products, and is expected to play a key role in anticipated technological leaps such as autonomous driving and general-purpose artificial intelligence. However, our scientific understanding of deep learning is woefully incomplete. Most methods to design and train these systems are based on rules-of-thumb and heuristics, and there is a drastic theory-practice gap in our understanding of why these systems work in practice. We believe this poses a significant risk to the long-term health of the field, as well as an obstacle to widening the applicability of deep learning beyond what has been achieved with current methods.
Our goal is to tackle head-on this important problem, and develop principled tools for understanding, designing, and training deep learning systems, based on rigorous theoretical results.
Our approach is to focus on three inter-related sources of performance loss in neural network learning: the optimization error (that is, how to train a given network in a computationally efficient manner); the estimation error (how to ensure that training a network on a finite training set yields good performance on future examples); and the approximation error (how architectural choices affect the type of functions the networks can compute). For each of these problems, we show how recent advances allow us to approach it effectively, and describe concrete preliminary results and ideas, which will serve as starting points and indicate the feasibility of this challenging project.
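These three quantities correspond to one standard decomposition of the excess risk of the trained network; the formulation below is a textbook statement added for orientation, with notation chosen here rather than taken from the proposal.

% Let H be the class of functions realizable by the chosen architecture,
% \hat{h} the network returned by training, \hat{h}_{\mathrm{erm}} an empirical risk
% minimizer over H, h_H a population risk minimizer over H, and L^* the Bayes risk.
\[
  L(\hat{h}) - L^{\ast}
  \;=\;
  \underbrace{L(\hat{h}) - L(\hat{h}_{\mathrm{erm}})}_{\text{optimization}}
  \;+\;
  \underbrace{L(\hat{h}_{\mathrm{erm}}) - L(h_{H})}_{\text{estimation}}
  \;+\;
  \underbrace{L(h_{H}) - L^{\ast}}_{\text{approximation}} .
\]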
Max ERC Funding
1 442 360 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym UncertainENV
Project The Power of Randomization in Uncertain Environments
Researcher (PI) Shiri Chechik
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Much of the research on the foundations of graph algorithms is carried out under the assumption that the algorithm has full knowledge of the input data. In spite of the theoretical appeal and simplicity of this setting, the assumption of full knowledge does not always hold. Indeed, uncertainty and partial knowledge arise in many settings. One example is where the data is very large, in which case even reading the entire data once is infeasible and sampling is required. Another example is where the data changes over time (e.g., social networks where information is fluid). A third example is where processing of the data is distributed over computation nodes, each of which has only local information.
Randomization is a powerful tool in the classic setting of graph algorithms with full knowledge, and is often used to simplify an algorithm and to speed up its running time. However, physical computers are deterministic machines, and obtaining true randomness can be a hard task. Therefore, a central line of research focuses on the derandomization of algorithms that rely on randomness. The challenge of derandomization also arises in settings where the algorithm has some degree of uncertainty; in fact, in many such settings the challenge and motivation of derandomization are even stronger. Randomization by itself adds another layer of uncertainty, because different results may be attained in different runs of the algorithm. In addition, in many settings with uncertainty randomization comes with additional assumptions on the model itself, and therefore weakens the guarantees of the algorithm.
In this proposal I will investigate the power of randomization in uncertain environments, focusing on two fundamental areas of graph algorithms with uncertainty: the first relates to dynamic algorithms, and the second concerns distributed graph algorithms.
Max ERC Funding
1 500 000 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym VERICOMP
Project Foundations of Verifiable Computing
Researcher (PI) Guy ROTHBLUM
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Proof systems allow a weak verifier to ascertain the correctness of complex computational statements. Efficiently-verifiable proof systems are fundamental objects in the study of computation, and have led to some of the deepest and most celebrated insights in cryptography and in complexity theory.
The vast and rich literature on proof systems focuses primarily on proving the correctness of intractable statements, e.g. ones that are NP-complete. While the verification can be efficient, the proofs themselves cannot be generated in polynomial time. This limits the applicability of such proof systems, both from a theoretical perspective and in their real-world impact. This proposal aims to obtain a comprehensive understanding of proof systems with polynomial-time proof generation, to explore their practical applicability, and to investigate their connections with foundational questions in cryptography and in complexity theory.
Our study will focus primarily on interactive proof systems for tractable computations. The proposed research aims to revolutionize our understanding of these foundational objects by providing a complete and tight characterization of the complexity of proving and verifying general statements, by achieving breakthroughs in the study of related proof system notions, such as cryptographic arguments, and by building a fine-grained “algorithmic” theory of proof systems for central polynomial-time computational problems.
Our research will leverage these advances towards diverse applications: from real-world security challenges, such as verifying the correctness of computations performed by the cloud and cryptographic “proofs of work”, to a complexity-theoretic understanding of the complexity of approximating problems in P and of solving them on random instances.
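A classical toy example of verifying a polynomial-time computation much faster than redoing it (an illustration in the same spirit as, but not taken from, the proposal) is Freivalds' randomized check for matrix multiplication: the verifier tests a claimed product C = A*B in O(n^2) time per round instead of recomputing the product.

# Freivalds' check: accept a claimed product C = A*B with one-sided error.
# A wrong C is caught with probability >= 1/2 per round; repeating amplifies this.
import random

def freivalds(A, B, C, rounds=20):
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]    # B*r
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]  # A*(B*r)
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]    # C*r
        if ABr != Cr:
            return False          # the claimed product is definitely wrong
    return True                   # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # the true product A*B
print(freivalds(A, B, C))                       # True
print(freivalds(A, B, [[19, 22], [43, 51]]))    # almost surely False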
Max ERC Funding
1 882 460 €
Duration
Start date: 2019-01-01, End date: 2023-12-31