Project acronym 20SComplexity
Project An integrative approach to uncover the multilevel regulation of 20S proteasome degradation
Researcher (PI) Michal Sharon
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), LS1, ERC-2014-STG
Summary For many years, the ubiquitin-26S proteasome degradation pathway was considered the primary route for proteasomal degradation. However, it is now becoming clear that proteins can also be targeted for degradation by a ubiquitin-independent mechanism mediated by the core 20S proteasome itself. Although initially believed to be limited to rare exceptions, degradation by the 20S proteasome is now understood to have a wide range of substrates, many of which are key regulatory proteins. Despite its importance, little is known about the mechanisms that control 20S proteasomal degradation, unlike the extensive knowledge acquired over the years concerning degradation by the 26S proteasome. Our overall aim is to reveal the multiple regulatory levels that coordinate the 20S proteasome degradation route.
To achieve this goal, we will carry out a comprehensive research program characterizing three distinct levels of 20S proteasome regulation:
Intra-molecular regulation: Revealing the intrinsic molecular switch that activates the latent 20S proteasome.
Inter-molecular regulation: Identifying novel proteins that bind the 20S proteasome to regulate its activity, and characterizing their mechanism of action.
Cellular regulatory networks: Unraveling the cellular cues and multiple pathways that influence 20S proteasome activity, using a novel systematic and unbiased screening approach.
Our experimental strategy combines biochemical approaches with native mass spectrometry, cross-linking and fluorescence measurements, complemented by cell biology analyses and high-throughput screening. Such a multidisciplinary approach, integrating in vitro and in vivo findings, will provide much-needed knowledge on the 20S proteasome degradation route. When completed, we anticipate that this work will be part of a new paradigm: no longer perceiving 20S proteasome-mediated degradation as a simple, passive event, but rather as a tightly regulated and coordinated process.
Max ERC Funding
1 500 000 €
Duration
Start date: 2015-04-01, End date: 2020-03-31
Project acronym 5D-NanoTrack
Project Five-Dimensional Localization Microscopy for Sub-Cellular Dynamics
Researcher (PI) Yoav SHECHTMAN
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE7, ERC-2018-STG
Summary The sub-cellular processes that control the most critical aspects of life occur in three-dimensions (3D), and are intrinsically dynamic. While super-resolution microscopy has revolutionized cellular imaging in recent years, our current capability to observe the dynamics of life on the nanoscale is still extremely limited, due to inherent trade-offs between spatial, temporal and spectral resolution using existing approaches.
We propose to develop and demonstrate an optical microscopy methodology that would enable live sub-cellular observation in unprecedented detail. Making use of multicolor 3D point-spread-function (PSF) engineering, a technique I have recently developed, we will be able to simultaneously track multiple markers inside live cells, at high speed and in five-dimensions (3D, time, and color).
Multicolor 3D PSF engineering holds the potential of being a uniquely powerful method for 5D tracking. However, it is not yet applicable to live-cell imaging, due to significant bottlenecks in optical engineering and signal processing, which we plan to overcome in this project. Importantly, we will also demonstrate the efficacy of our method using a challenging biological application: real-time visualization of chromatin dynamics - the spatiotemporal organization of DNA. This is a highly suitable problem due to its fundamental importance, its role in a variety of cellular processes, and the lack of appropriate tools for studying it.
The project is divided into 3 aims:
1. Technology development: diffractive-element design for multicolor 3D PSFs.
2. System design: volumetric tracking of dense emitters.
3. Live-cell measurements: chromatin dynamics.
Looking ahead, here we create the imaging tools that pave the way towards the holy grail of chromatin visualization: dynamic observation of the 3D positions of the ~3 billion DNA base-pairs in a live human cell. Beyond that, our results will be applicable to numerous 3D micro/nanoscale tracking applications.
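To make the localization principle concrete, the following is a minimal, illustrative sketch of how an engineered PSF encodes the axial coordinate. It uses a simple astigmatic Gaussian model with made-up calibration coefficients; it is not the project's multicolor PSF design.

```python
import numpy as np

# Toy astigmatic-PSF calibration: defocus makes the spot elliptical, so z
# is encoded in (sigma_x, sigma_y). All coefficients are illustrative;
# real curves come from calibration stacks of fluorescent beads.
def sigmas(z_nm, s0=150.0, gamma=400.0, d=600.0):
    sx = s0 * np.sqrt(1.0 + ((z_nm - gamma) / d) ** 2)
    sy = s0 * np.sqrt(1.0 + ((z_nm + gamma) / d) ** 2)
    return sx, sy

def z_from_widths(sx, sy):
    # Invert the calibration by nearest lookup on a dense z grid.
    z_grid = np.linspace(-800.0, 800.0, 1601)
    cx, cy = sigmas(z_grid)
    cost = (np.sqrt(cx) - np.sqrt(sx)) ** 2 + (np.sqrt(cy) - np.sqrt(sy)) ** 2
    return z_grid[int(np.argmin(cost))]

sx, sy = sigmas(250.0)        # widths measured for an emitter at z = +250 nm
print(z_from_widths(sx, sy))  # recovers ~250.0
```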
Max ERC Funding
1 802 500 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym ABDESIGN
Project Computational design of novel protein function in antibodies
Researcher (PI) Sarel-Jacob Fleishman
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), LS1, ERC-2013-StG
Summary We propose to elucidate the structural design principles of naturally occurring antibody complementarity-determining regions (CDRs) and to computationally design novel antibody functions. Antibodies represent the most versatile known system for molecular recognition. Research has yielded many insights into antibody design principles and promising biotechnological and pharmaceutical applications. Still, our understanding of how CDRs encode specific loop conformations lags far behind our understanding of structure-function relationships in non-immunological scaffolds. Thus, design of antibodies from first principles has not been demonstrated. We propose a computational-experimental strategy to address this challenge. We will: (a) characterize the design principles and sequence elements that rigidify antibody CDRs. Natural antibody loops will be subjected to computational modeling, crystallography, and a combined in vitro evolution and deep-sequencing approach to isolate sequence features that rigidify loop backbones; (b) develop a novel computational-design strategy, which uses the >1000 solved structures of antibodies deposited in structure databases to realistically model CDRs and design them to recognize proteins that have not been co-crystallized with antibodies. For example, we will design novel antibodies targeting insulin, for which clinically useful diagnostics are needed. By accessing much larger sequence/structure spaces than are available to natural immune-system repertoires and experimental methods, computational antibody design could produce higher-specificity and higher-affinity binders, even to challenging targets; and (c) develop new strategies to program conformational change in CDRs, generating, e.g., the first allosteric antibodies. These will allow targeting, in principle, of any molecule, potentially revolutionizing how antibodies are generated for research and medicine, providing new insights on the design principles of protein functional sites.
Max ERC Funding
1 499 930 €
Duration
Start date: 2013-09-01, End date: 2018-08-31
Project acronym AEROBIC
Project Assessing the Effects of Rising O2 on Biogeochemical Cycles: Integrated Laboratory Experiments and Numerical Simulations
Researcher (PI) Itay Halevy
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE10, ERC-2013-StG
Summary The rise of atmospheric O2 ~2,500 million years ago is one of the most profound transitions in Earth's history. Yet, despite its central role in shaping Earth's surface environment, the cause for the rise of O2 remains poorly understood. Tight coupling between the O2 cycle and the biogeochemical cycles of redox-active elements, such as C, Fe and S, implies radical changes in these cycles before, during and after the rise of O2. These changes, too, are incompletely understood, but have left valuable information encoded in the geological record. This information has been qualitatively interpreted, leaving many aspects of the rise of O2, including its causes and constraints on ocean chemistry before and after it, topics of ongoing research and debate. Here, I outline a research program to address this fundamental question in geochemical Earth systems evolution. The inherently interdisciplinary program uniquely integrates laboratory experiments, numerical models, geological observations, and geochemical analyses. Laboratory experiments and geological observations will constrain unknown parameters of the early biogeochemical cycles, and, in combination with field studies, will validate and refine the use of paleoenvironmental proxies. The insight gained will be used to develop detailed models of the coupled biogeochemical cycles, which will themselves be used to quantitatively understand the events surrounding the rise of O2, and to illuminate the dynamics of elemental cycles in the early oceans.
This program is expected to yield novel, quantitative insight into these important events in Earth history and to have a major impact on our understanding of early ocean chemistry and the rise of O2. An ERC Starting Grant will enable me to use the excellent experimental and computational facilities at my disposal, to access the outstanding human resources at the Weizmann Institute of Science, and to address one of the major open questions in modern geochemistry.
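For orientation, a box model of the kind described can be written in a few lines. The sketch below is a deliberately crude two-flux O2 budget with invented parameter values, not the project's coupled-cycle model.

```python
# Minimal sketch of an atmospheric-O2 box model (illustrative parameters):
# O2 grows by organic-carbon burial and is consumed by oxidative
# weathering, modeled here as a sink that saturates with the O2 inventory.
def step_o2(o2, burial=2.0e12, k_weather=4.0e12, k_half=1.0e17, dt=1.0):
    # o2 in mol, fluxes in mol/yr, dt in yr
    weathering = k_weather * o2 / (o2 + k_half)
    return o2 + (burial - weathering) * dt

o2 = 1.0e15                       # a low, Archean-like starting inventory
for _ in range(2_000_000):        # integrate 2 Myr in 1-yr steps
    o2 = step_o2(o2)
print(f"inventory after 2 Myr: {o2:.2e} mol")   # relaxes toward 1e17 mol
```

At steady state burial balances weathering, so the inventory settles where o2/(o2 + k_half) = burial/k_weather; with these toy numbers that is 1e17 mol.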
Max ERC Funding
1 472 690 €
Duration
Start date: 2013-09-01, End date: 2018-08-31
Project acronym AGALT
Project Asymptotic Geometric Analysis and Learning Theory
Researcher (PI) Shahar Mendelson
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE1, ERC-2007-StG
Summary In a typical learning problem one tries to approximate an unknown function by a function from a given class using random data, sampled according to an unknown measure. In this project we will be interested in parameters that govern the complexity of a learning problem. It turns out that this complexity is determined by the geometry of certain sets in high dimension that are connected to the given class (random coordinate projections of the class). Thus, one has to understand the structure of these sets as a function of the dimension - which is given by the cardinality of the random sample. The resulting analysis leads to many theoretical questions in Asymptotic Geometric Analysis, Probability (most notably, Empirical Processes Theory) and Combinatorics, which are of independent interest beyond the application to Learning Theory. Our main goal is to describe the role of various complexity parameters involved in a learning problem, to analyze the connections between them and to investigate the way they determine the geometry of the relevant high dimensional sets. Some of the questions we intend to tackle are well known open problems and making progress towards their solution will have a significant theoretical impact. Moreover, this project should lead to a more complete theory of learning and is likely to have some practical impact, for example, in the design of more efficient learning algorithms.
Max ERC Funding
750 000 €
Duration
Start date: 2009-03-01, End date: 2014-02-28
Project acronym AMD
Project Algorithmic Mechanism Design: Beyond Truthful Mechanisms
Researcher (PI) Michal Feldman
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2013-StG
Summary "The first decade of Algorithmic Mechanism Design (AMD) concentrated, very successfully, on the design of truthful mechanisms for the allocation of resources among agents with private preferences.
Truthful mechanisms are ones that incentivize rational users to report their preferences truthfully.
Truthfulness, however, for all its theoretical appeal, suffers from several inherent limitations, mainly its high communication and computation complexities.
It is not surprising, therefore, that practical applications forego truthfulness and use simpler mechanisms instead.
Simplicity in itself, however, is not sufficient, as any meaningful mechanism should also have some notion of fairness; otherwise agents will stop using it over time.
In this project I plan to develop an innovative AMD theoretical framework that will go beyond truthfulness and focus instead on the natural themes of simplicity and fairness, in addition to computational tractability.
One of my primary goals will be the design of simple and fair poly-time mechanisms that perform at near-optimal levels with respect to important economic objectives such as social welfare and revenue.
To this end, I will work toward providing precise definitions of simplicity and fairness and quantifying the effects of these restrictions on the performance levels that can be obtained.
A major challenge in the study of non-truthful mechanisms is defining a reasonable behavior model under which they can be evaluated.
The success of this project could have a broad impact on Europe and beyond, as it would guide the design of natural mechanisms for markets of tens of billions of dollars in revenue, such as online advertising, or sales of wireless frequencies.
The timing of this project is ideal, as the AMD field is now sufficiently mature to lead to a breakthrough and at the same time young enough to be receptive to new approaches and themes.
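To fix ideas, the canonical truthful mechanism referred to above is the Vickrey (second-price) auction; the few lines below are a textbook sketch, not part of the proposal itself.

```python
# Vickrey (second-price) auction: the highest bidder wins but pays the
# second-highest bid, so no bidder can gain by misreporting their value.
def vickrey(bids):
    """bids: dict bidder -> reported value; returns (winner, price)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

winner, price = vickrey({"alice": 10.0, "bob": 7.0, "carol": 4.0})
print(winner, price)  # alice wins and pays bob's bid of 7.0
```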
Max ERC Funding
1 394 600 €
Duration
Start date: 2013-11-01, End date: 2018-10-31
Project acronym ANYONIC
Project Statistics of Exotic Fractional Hall States
Researcher (PI) Mordehai HEIBLUM
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE3, ERC-2018-ADG
Summary Since their discovery, Quantum Hall Effects have unfolded intriguing avenues of research, exhibiting a multitude of unexpected exotic states: accurate quantized conductance states; particle-like and hole-conjugate fractional states; counter-propagating charge and neutral edge modes; and fractionally charged quasiparticles - abelian and (predicted) non-abelian. Since the sought-after anyonic statistics of fractional states is yet to be verified, I propose to launch a thorough search for it employing new means. I believe that our studies will serve the expanding field of the emerging family of topological materials.
Our on-going attempts to observe quasiparticles (qp’s) interference, in order to uncover their exchange statistics (under ERC), taught us that spontaneous, non-topological, ‘neutral edge modes’ are the main culprit responsible for qp’s dephasing. In an effort to quench the neutral modes, we plan to develop a new class of micro-size interferometers, based on synthetically engineered fractional modes. Flowing away from the fixed physical edge, their local environment can be controlled, making it less hospitable for the neutral modes.
Having at hand our synthetized helical-type fractional modes, it is highly tempting to employ them to form localize para-fermions, which will extend the family of exotic states. This can be done by proximitizing them to a superconductor, or gapping them via inter-mode coupling.
The less familiar thermal conductance measurements, which we recently developed (under ERC), will be applied throughout our work to identify ‘topological orders’ of exotic states; namely, distinguishing between abelian and non-abelian fractional states.
The proposal is based on an intensive and continuous MBE effort, aimed at developing extremely high-purity, GaAs-based structures. Among them, structures that support our new synthetic modes that are amenable to manipulation, and others that host rare exotic states, such as ν=5/2, 12/5, 19/8, and 35/16.
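For context, the statistics being sought can be stated compactly in the simplest abelian case; the display below is the textbook Laughlin/interferometer picture, included for orientation rather than as a claim of this proposal.

```latex
% Exchanging two Laughlin quasiparticles at filling nu = 1/m multiplies
% the wave function by a fractional (anyonic) phase:
\[
\Psi \;\longrightarrow\; e^{i\theta}\,\Psi,
\qquad \theta = \frac{\pi}{m},
\qquad e^{*} = \frac{e}{m}.
\]
% In a Fabry-Perot interferometer the interference phase then combines an
% Aharonov-Bohm term with a statistical term from the N_qp quasiparticles
% localized inside the loop:
\[
\varphi \;=\; 2\pi\,\frac{e^{*}}{e}\,\frac{\Phi}{\Phi_{0}}
\;+\; 2\theta\,N_{\mathrm{qp}},
\qquad \Phi_{0} = \frac{h}{e}.
\]
```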
Max ERC Funding
1 801 094 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym ARITHQUANTUMCHAOS
Project Arithmetic and Quantum Chaos
Researcher (PI) Zeev Rudnick
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE1, ERC-2012-ADG_20120216
Summary Quantum Chaos is an emerging discipline which is crossing over from Physics into Pure Mathematics. The recent crossover is driven in part by a connection with Number Theory. This project explores several aspects of this interrelationship and is composed of a number of sub-projects. The sub-projects deal with: statistics of energy levels and wave functions of pseudo-integrable systems, a hitherto unexplored subject in the mathematical community which is not well understood in the physics community; with statistics of zeros of zeta functions over function fields, a purely number theoretic topic which is linked to the subproject on Quantum Chaos through the mysterious connections to Random Matrix Theory and an analogy between energy levels and zeta zeros; and with spatial statistics in arithmetic.
Max ERC Funding
1 714 000 €
Duration
Start date: 2013-02-01, End date: 2019-01-31
Project acronym BANDWIDTH
Project The cost of limited communication bandwidth in distributed computing
Researcher (PI) Keren CENSOR-HILLEL
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Distributed systems underlie many modern technologies, a prime example being the Internet. The ever-increasing abundance of distributed systems necessitates their design and usage to be backed by strong theoretical foundations.
A major challenge that distributed systems face is the lack of a central authority, which brings many aspects of uncertainty into the environment, in the form of unknown network topology or unpredictable dynamic behavior. A practical restriction of distributed systems, which is at the heart of this proposal, is the limited bandwidth available for communication between the network components.
A central family of distributed tasks is that of local tasks, informally described as tasks that can be solved by sending information through only a relatively small number of hops. A cornerstone example is the need to break symmetry and provide better utilization of resources, which can be obtained by the task of producing a valid coloring of the nodes given some small number of colors. Amazingly, there are still huge gaps between the known upper and lower bounds for the complexity of many local tasks. This holds even if one allows the powerful assumption of unlimited bandwidth. While some known algorithms indeed use small messages, the complexity gaps are even larger compared to the unlimited-bandwidth case. This is not a mere coincidence; in fact, the existing theoretical infrastructure is provably incapable of giving stronger lower bounds for many local tasks under limited bandwidth.
This proposal zooms in on this crucial blind spot in the current literature on the theory of distributed computing, namely, the study of local tasks under limited bandwidth. The goal of this research is to produce fast algorithms for fundamental distributed local tasks under restricted bandwidth, as well as understand their limitations by providing lower bounds.
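To make the coloring example concrete, here is a minimal simulation of a classic randomized (Δ+1)-coloring scheme in which each message carries only a single color name, i.e., it already fits a small-bandwidth (CONGEST-style) model. It is an illustration of the task, not an algorithm proposed here.

```python
import random

# Randomized (Delta+1)-coloring, simulated round by round: each uncolored
# node proposes a color unused by its neighbors and keeps it only if no
# neighbor proposed the same color. Each message is one color name.
def coloring(graph, max_deg, rounds=100, seed=0):
    rng = random.Random(seed)
    color = {v: None for v in graph}
    palette = range(max_deg + 1)
    for _ in range(rounds):
        pend = [v for v in graph if color[v] is None]
        if not pend:
            break
        trial = {v: rng.choice([c for c in palette
                                if c not in {color[u] for u in graph[v]}])
                 for v in pend}
        for v, c in trial.items():
            if all(trial.get(u) != c and color[u] != c for u in graph[v]):
                color[v] = c          # no conflict: finalize the color
    return color

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # a small test graph
print(coloring(g, max_deg=3))
```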
Max ERC Funding
1 486 480 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym BeadsOnString
Project Beads on String Genomics: Experimental Toolbox for Unmasking Genetic / Epigenetic Variation in Genomic DNA and Chromatin
Researcher (PI) Yuval Ebenstein
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE4, ERC-2013-StG
Summary Next generation sequencing (NGS) is revolutionizing all fields of biological research but it fails to extract the full range of information associated with genetic material and is lacking in its ability to resolve variations between genomes. The high degree of genome variation exhibited both on the population level as well as between genetically “identical” cells (even in the same organ) makes genetic and epigenetic analysis on the single cell and single genome level a necessity.
Chromosomes may be conceptually represented as a linear one-dimensional barcode. However, in contrast to a traditional binary barcode approach that considers only two possible bits of information (1 & 0), I will use colour and molecular structure to expand the variety of information represented in the barcode. Like colourful beads threaded on a string, where each bead represents a distinct type of observable, I will label each type of genomic information with a different chemical moiety thus expanding the repertoire of information that can be simultaneously measured. A major effort in this proposal is invested in the development of unique chemistries to enable this labelling.
I specifically address three types of genomic variation: Variations in genomic layout (including DNA repeats, structural and copy number variations), variations in the patterns of chemical DNA modifications (such as methylation of cytosine bases) and variations in the chromatin composition (including nucleosome and transcription factor distributions). I will use physical extension of long DNA molecules on surfaces and in nanofluidic channels to reveal this information visually in the form of a linear, fluorescent “barcode” that is read-out by advanced imaging techniques. Similarly, DNA molecules will be threaded through a nanopore where the sequential position of “bulky” molecular groups attached to the DNA may be inferred from temporal modulation of an ionic current measured across the pore.
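A toy version of the barcode idea, reduced to its computational core: every occurrence of a labelled recognition motif becomes a "bead", so a molecule is summarized by an ordered list of label positions. The sequence below is made up; GCTCTTC is a nicking-enzyme recognition site of the kind commonly used for such labelling.

```python
import re

# Reduce a DNA molecule to its "beads on a string": the ordered positions
# of a labelled recognition motif along the sequence.
def barcode(seq, motif="GCTCTTC"):
    return [m.start() for m in re.finditer(motif, seq)]

reference = "AAGCTCTTCTTTTGCTCTTCAAAAAAGCTCTTCGG"
print(barcode(reference))  # [2, 13, 26] -- the label positions
```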
Max ERC Funding
1 627 600 €
Duration
Start date: 2013-10-01, End date: 2018-09-30
Project acronym BEAMING
Project Detecting massive-planet/brown-dwarf/low-mass-stellar companions with the beaming effect
Researcher (PI) Moshe Zvi Mazeh
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE9, ERC-2011-ADG_20110209
Summary "I propose to lead an international observational effort to characterize the population of massive planets, brown dwarf and stellar secondaries orbiting their parent stars with short periods, up to 10-30 days. The effort will utilize the superb, accurate, continuous lightcurves of more than hundred thousand stars obtained recently by two space missions – CoRoT and Kepler. I propose to use these lightcurves to detect non-transiting low-mass companions with a new algorithm, BEER, which I developed recently together with Simchon Faigler. BEER searches for the beaming effect, which causes the stellar intensity to increase if the star is moving towards the observer. The combination of the beaming effect with other modulations induced by a low-mass companion produces periodic modulation with a specific signature, which is used to detect small non-transiting companions. The accuracy of the space mission lightcurves is enough to detect massive planets with short periods. The proposed project is equivalent to a radial-velocity survey of tens of thousands of stars, instead of the presently active surveys which observe only hundreds of stars.
We will use an assortment of telescopes to perform radial velocity follow-up observations in order to confirm the existence of the detected companions, and to derive their masses and orbital eccentricities. We will discover many tens, if not hundreds, of new massive planets and brown dwarfs with short periods, and many thousands of new binaries. The findings will enable us to map the mass, period, and eccentricity distributions of planets and stellar companions, determine the upper mass of planets, understand the nature of the brown-dwarf desert, and put strong constrains on the theory of planet and binary formation and evolution."
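For scale, the photometric signal that BEER searches for can be estimated from the standard first-order beaming formula below (bolometric approximation; the numerical example is ours, not taken from the proposal).

```latex
% To first order in v_r/c, beaming modulates the observed flux of a star
% with radial velocity v_r (positive when receding) as
\[
F(t) \;\simeq\; F_{0}\Bigl(1 - 4\,\frac{v_{r}(t)}{c}\Bigr),
\qquad
A_{\mathrm{beam}} \;\simeq\; 4\,\frac{K}{c},
\]
% where K is the radial-velocity semi-amplitude. A short-period brown
% dwarf inducing K ~ 1 km/s gives A_beam ~ 1.3e-5, i.e. about 13 ppm,
% detectable in CoRoT/Kepler photometry once many orbital cycles are
% phase-folded.
```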
Max ERC Funding
1 737 600 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym BeyondA1
Project Set theory beyond the first uncountable cardinal
Researcher (PI) Assaf Shmuel Rinot
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary We propose to establish a research group that will unveil the combinatorial nature of the second uncountable cardinal. This includes its Ramsey-theoretic, order-theoretic, graph-theoretic and topological features. Among others, we will be directly addressing fundamental problems due to Erdos, Rado, Galvin, and Shelah.
While some of these problems are old and well known, an unexpected series of breakthroughs over the last three years suggests that now is a promising point in time to carry out such a project. Indeed, within a short period, four previously unattainable problems concerning the second uncountable cardinal were successfully tackled: Aspero on a club-guessing problem of Shelah, Krueger on the club-isomorphism problem for Aronszajn trees, Neeman on the isomorphism problem for dense sets of reals, and the PI on the Souslin problem. Each of these results was obtained through the development of a completely new technical framework, and these frameworks could now pave the way for the solution of some major open questions.
A goal of the highest risk in this project is the discovery of a consistent (possibly, parameterized) forcing axiom that will (preferably, simultaneously) provide structure theorems for stationary sets, linearly ordered sets, trees, graphs, and partition relations, as well as the refutation of various forms of club-guessing principles, all at the level of the second uncountable cardinal. In comparison, at the level of the first uncountable cardinal, a forcing axiom due to Foreman, Magidor and Shelah achieves exactly that.
To approach our goals, the proposed project is divided into four core areas: Uncountable trees, Ramsey theory on ordinals, Club-guessing principles, and Forcing Axioms. There is a rich bilateral interaction between any pair of the four different cores, but the proposed division will allow an efficient allocation of manpower, and will increase the chances of parallel success.
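For orientation, one of the recurring notions can be stated precisely; the display below is the standard definition of a club-guessing sequence, specialized to the cardinal in question.

```latex
% A club-guessing sequence on a stationary set S of limit ordinals below
% omega_2 is a sequence
\[
\langle C_{\delta} \mid \delta \in S \rangle,
\qquad C_{\delta} \ \text{a club subset of } \delta,
\]
% such that for every club D \subseteq \omega_2 there exists \delta \in S
% with C_{\delta} \subseteq D. Club-guessing principles assert that such
% sequences exist; one aim above is to refute strong forms of them from a
% suitable forcing axiom.
```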
Max ERC Funding
1 362 500 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym BioMet
Project Selective Functionalization of Saturated Hydrocarbons
Researcher (PI) Ilan MAREK
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Advanced Grant (AdG), PE5, ERC-2017-ADG
Summary Despite that C–H functionalization represents a paradigm shift from the standard logic of organic synthesis, the selective activation of non-functionalized alkanes has puzzled chemists for centuries and is always referred to one of the remaining major challenges in chemical sciences. Alkanes are inert compounds representing the major constituents of natural gas and petroleum. Converting these cheap and widely available hydrocarbon feedstocks into added-value intermediates would tremendously affect the field of chemistry. For long saturated hydrocarbons, one must distinguish between non-equivalent but chemically very similar alkane substrate C−H bonds, and for functionalization at the terminus position, one must favor activation of the stronger, primary C−H bonds at the expense of weaker and numerous secondary C-H bonds. The goal of this work is to develop a general principle in organic synthesis for the preparation of a wide variety of more complex molecular architectures from saturated hydrocarbons. In our approach, the alkane will first be transformed into an alkene that will subsequently be engaged in a metal-catalyzed hydrometalation/migration sequence. The first step of the sequence, ideally represented by the removal of two hydrogen atoms, will be performed by the use of a mutated strain of Rhodococcus. The position and geometry of the formed double bond has no effect on the second step of the reaction as the metal-catalyzed hydrometalation/migration will isomerize the double bond along the carbon skeleton to selectively produce the primary organometallic species. Trapping the resulting organometallic derivatives with a large variety of electrophiles will provide the desired functionalized alkane. This work will lead to the invention of new, selective and efficient processes for the utilization of simple hydrocarbons and valorize the synthetic potential of raw hydrocarbon feedstock for the environmentally benign production of new compounds and new materials.
Max ERC Funding
2 499 375 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym BIONICS
Project Bio-Inspired Routes for Controlling the Structure and Properties of Materials: Reusing proven tricks on new materials
Researcher (PI) Boaz Pokroy
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE5, ERC-2013-StG
Summary "In the course of biomineralization, organisms produce a large variety of functional biogenic crystals that exhibit fascinating mechanical, optical, magnetic and other characteristics. More specifically, when living organisms grow crystals they can effectively control polymorph selection as well as the crystal morphology, shape, and even atomic structure. Materials existing in nature have extraordinary and specific functions, yet the materials employed in nature are quite different from those engineers would select.
I propose to emulate specific strategies used by organisms in forming structural biogenic crystals, and to apply these strategies biomimetically so as to form new structural materials with new properties and characteristics. This bio-inspired approach will involve the adoption of three specific biological strategies. We believe that this procedure will open up new ways to control the structure and properties of smart materials.
The three bio-inspired strategies that we will utilize are:
(i) to control the short-range order of amorphous materials, making it possible to predetermine the polymorph obtained when they transform from the amorphous to the succeeding crystalline phase;
(ii) to control the morphology of single crystals of various functional materials so that they can have intricate and curved surfaces and yet maintain their single-crystal nature;
(iii) to entrap organic molecules into single crystals of functional materials so as to tailor and manipulate their electronic structure.
The proposed research has significant potential for opening up new routes for the formation of novel functional materials. Specifically, it will make it possible for us
(1) to produce single, intricately shaped crystals without the need to etch, drill or polish;
(2) to control the short-range order of amorphous materials and hence the polymorph of the successive crystalline phase;
(3) to tune the band gap of semiconductors via incorporation of tailored bio-molecules.
Max ERC Funding
1 500 000 €
Duration
Start date: 2013-09-01, End date: 2018-08-31
Project acronym BIOSELFORGANIZATION
Project Biophysical aspects of self-organization in actin-based cell motility
Researcher (PI) Kinneret Magda Keren
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE3, ERC-2007-StG
Summary Cell motility is a fascinating dynamic process crucial for a wide variety of biological phenomena including defense against injury or infection, embryogenesis and cancer metastasis. A spatially extended, self-organized, mechanochemical machine consisting of numerous actin polymers, accessory proteins and molecular motors drives this process. This impressive assembly self-organizes over several orders of magnitude in both the temporal and spatial domains bridging from the fast dynamics of individual molecular-sized building blocks to the persistent motion of whole cells over minutes and hours. The molecular players involved in the process and the basic biochemical mechanisms are largely known. However, the principles governing the assembly of the motility apparatus, which involve an intricate interplay between biophysical processes and biochemical reactions, are still poorly understood. The proposed research is focused on investigating the biophysical aspects of the self-organization processes underlying cell motility and trying to adapt these processes to instill motility in artificial cells. Important biophysical characteristics of moving cells such as the intracellular fluid flow and membrane tension will be measured and their effect on the motility process will be examined, using fish epithelial keratocytes as a model system. The dynamics of the system will be further investigated by quantitatively analyzing the morphological and kinematic variation displayed by a population of cells and by an individual cell through time. Such measurements will feed into and direct the development of quantitative theoretical models. In parallel, I will work toward the development of a synthetic physical model system for cell motility by encapsulating the actin machinery in a cell-sized compartment. This synthetic system will allow cell motility to be studied in a simplified and controlled environment, detached from the complexity of the living cell.
Max ERC Funding
900 000 €
Duration
Start date: 2008-08-01, End date: 2013-07-31
Project acronym BirNonArchGeom
Project Birational and non-archimedean geometries
Researcher (PI) Michael TEMKIN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary Resolution of singularities is one of the classical, central and difficult areas of algebraic geometry, with a century-long history of intensive research and contributions from such great names as Zariski, Hironaka and Abhyankar. Nowadays, desingularization of schemes of characteristic zero is very well understood, while semistable reduction of morphisms and desingularization in positive characteristic are still waiting for major breakthroughs. In addition to the classical techniques with their triumph in characteristic zero, modern resolution of singularities includes de Jong's method of alterations, toroidal methods, formal analytic and non-archimedean methods, etc.
The aim of the proposed research is to study nearly all directions in resolution of singularities and semistable reduction, as well as the wild ramification phenomena, which are probably the main obstacle to transferring methods from characteristic zero to positive characteristic.
The methods of algebraic and non-archimedean geometries are intertwined in the proposal, though algebraic geometry is somewhat dominant, especially due to the new stack-theoretic techniques. It seems very probable that increasing the symbiosis between birational and non-archimedean geometries will be one of the by-products of this research.
Max ERC Funding
1 365 600 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym BNYQ
Project Breaking the Nyquist Barrier: A New Paradigm in Data Conversion and Transmission
Researcher (PI) Yonina Eldar
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Consolidator Grant (CoG), PE7, ERC-2014-CoG
Summary Digital signal processing (DSP) is a revolutionary paradigm shift enabling processing of physical data in the digital domain where design and implementation are considerably simplified. However, state-of-the-art analog-to-digital converters (ADCs) preclude high-rate wideband sampling and processing with low cost and energy consumption, presenting a major bottleneck. This is mostly due to the traditional assumption that sampling must be performed at the Nyquist rate, that is, twice the signal bandwidth. Modern applications, including communications, medical imaging and radar, use signals with high bandwidth, resulting in prohibitively large Nyquist rates.
Our ambitious goal is to introduce a paradigm shift in ADC design that will enable systems capable of low-rate, wideband sensing and low-rate DSP.
While DSP has a rich history in exploiting structure to reduce dimensionality and perform efficient parameter extraction, current ADCs do not exploit such knowledge.
We challenge current practice that separates the sampling stage from the processing stage and exploit structure in analog signals already in the ADC, to drastically reduce the sampling and processing rates.
Our preliminary data show that this allows substantial savings in sampling and processing rates: we demonstrate rate reductions of 1/28 in ultrasound imaging and 1/30 in radar detection.
To achieve our overarching goal we focus on three interconnected objectives: developing the 1) theory, 2) hardware and 3) applications of sub-Nyquist sampling.
Our methodology ties together two areas on the frontier of signal processing: compressed sensing (CS), focused on finite length vectors, and analog sampling. Our research plan also inherently relies on advances in several other important areas within signal processing and combines multi-disciplinary research at the intersection of signal processing, information theory, optimization, estimation theory and hardware design.
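For orientation, the rate claims above can be made concrete with the classical sampling criterion (a standard textbook relation, stated here as background rather than taken from the proposal). A signal of bandwidth B admits perfect recovery from uniform samples only if
\[ f_s \;\ge\; f_{\mathrm{Nyq}} = 2B , \]
whereas a structured signal described by a few parameters (for instance, a stream of K echoes with unknown delays and amplitudes, as in ultrasound or radar) can in principle be acquired near its rate of innovation, far below 2B. The reported factors then read \( f_s \approx f_{\mathrm{Nyq}}/28 \) (ultrasound) and \( f_s \approx f_{\mathrm{Nyq}}/30 \) (radar).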
Max ERC Funding
2 400 000 €
Duration
Start date: 2015-08-01, End date: 2020-07-31
Project acronym BOTTOM-UP_SYSCHEM
Project Systems Chemistry from Bottom Up: Switching, Gating and Oscillations in Non Enzymatic Peptide Networks
Researcher (PI) Gonen Ashkenasy
Host Institution (HI) BEN-GURION UNIVERSITY OF THE NEGEV
Call Details Starting Grant (StG), PE5, ERC-2010-StG_20091028
Summary The study of synthetic molecular networks is of fundamental importance for understanding the organizational principles of biological systems and may well be the key to unraveling the origins of life. In addition, such systems may be useful for parallel synthesis of molecules, implementation of catalysis via multi-step pathways, and as media for various applications in nano-medicine and nano-electronics. We have recently been involved in developing peptide-based replicating networks and have revealed their dynamic characteristics. We argue here that the structural information embedded in the polypeptide chains is sufficiently rich to allow the construction of peptide 'Systems Chemistry', namely, to facilitate the use of replicating networks as cell-mimetics, featuring complex dynamic behavior. To bring this novel idea to reality, we plan to take a unique holistic approach by studying such networks both experimentally and via simulations, for elucidating basic principles and towards applications in adjacent fields, such as molecular electronics. Towards realizing these aims, we will study three separate but inter-related objectives: (i) design and characterization of networks that react and rewire in response to external triggers, such as light, (ii) design of networks that operate via new dynamic rules of product formation that lead to oscillations, and (iii) exploitation of the molecular information gathered from the networks as a means to control switching and gating in molecular electronic devices. We believe that achieving the project's objectives will be highly significant for the development of the arising field of Systems Chemistry, and in addition will provide valuable tools for studying related scientific fields, such as systems biology and molecular electronics.
Max ERC Funding
1 500 000 €
Duration
Start date: 2010-10-01, End date: 2015-09-30
Project acronym CAC
Project Cryptography and Complexity
Researcher (PI) Yuval Ishai
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary Modern cryptography has deeply rooted connections with computational complexity theory and other areas of computer science. This proposal suggests exploring several new connections between questions in cryptography and questions from other domains, including computational complexity, coding theory, and even the natural sciences. The project is expected to broaden the impact of ideas from cryptography on other domains, and conversely to benefit cryptography by applying tools from other domains towards better solutions for central problems in cryptography.
Max ERC Funding
1 459 703 €
Duration
Start date: 2010-12-01, End date: 2015-11-30
Project acronym CAP
Project Computers Arguing with People
Researcher (PI) Sarit Kraus
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Advanced Grant (AdG), PE6, ERC-2010-AdG_20100224
Summary An important form of negotiation is argumentation. This is the ability to argue and to persuade the other party to accept a desired agreement, to acquire or give information, to coordinate goals and actions, and to find and verify evidence. This is a key capability in negotiating with humans.
While automated negotiation between software agents can often be reduced to exchanging offers and counteroffers, humans require persuasion. This challenges the design of agents that argue with people, with the objective that the outcome of the negotiation will meet the preferences of the arguing agent.
CAP’s objective is to enable automated agents to argue and persuade humans.
To achieve this, we intend to develop the following key components:
1) The extension of current game theory models of persuasion and bargaining to more realistic settings, 2) Algorithms and heuristics for generation and evaluation of arguments during negotiation with people, 3) Algorithms and heuristics for managing inconsistent views of the negotiation environment, and decision procedures for revelation, signalling, and requesting information, 4) The revision and update of the agent’s mental state and incorporation of social context, 5) Identifying strategies for expressing emotions in negotiations, 6) Technology for general opponent modelling from sparse and noisy data.
To demonstrate the developed methods, we will implement two training systems: one for people to improve their interviewing capabilities, and one for training negotiators in intercultural negotiations.
CAP will revolutionise the state of the art of automated systems negotiating with people. It will also create breakthroughs in the research of multi-agent systems in general, and will change paradigms by providing new directions for the way computers interact with people.
Max ERC Funding
2 334 057 €
Duration
Start date: 2011-07-01, End date: 2016-06-30
Project acronym CAPRI
Project Clouds and Precipitation Response to Anthropogenic Changes in the Natural Environment
Researcher (PI) Ilan Koren
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE10, ERC-2012-StG_20111012
Summary Clouds and precipitation play a crucial role in the Earth's energy balance, global atmospheric circulation and the water cycle. Despite their importance, clouds still pose the largest uncertainty in climate research.
I propose a new approach for studying anthropogenic effects on cloud fields and rain, tackling the challenge from both scientific ends: reductionism and the systems approach. We will develop a novel research approach using observations and models interactively, allowing us to “peel apart” detailed physical processes. In parallel, we will develop a systems view of cloud fields, looking for emergent behavior arising out of the complexity as the end result of all of the coupled processes. A better understanding of key processes in a detailed (reductionist) manner will enable us to formulate the important basic rules that control the field and to look for the emergence of the overall effects.
We will merge ideas and methods from four different disciplines: remote sensing and radiative transfer, cloud physics, pattern recognition and computer vision, and ideas developed in the systems approach. All of this will be done against the backdrop of the natural variability of meteorological systems.
The outcomes of this work will include a fundamental new understanding of the coupled surface-aerosol-cloud-precipitation system. More importantly, this work will emphasize the consequences of human actions on the environment, and how we change our climate and hydrological cycle as we introduce pollutants and transform the Earth's surface. This work will open new horizons in cloud research by developing novel methods and employing the bulk knowledge of pattern recognition, complexity, networking and self-organization in cloud and climate studies. We are proposing a long-term, open-ended program of study that will have scientific and societal relevance as long as human-caused influences continue, evolve and change.
Max ERC Funding
1 428 169 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym CartiLube
Project Lubricating Cartilage: exploring the relation between lubrication and gene-regulation to alleviate osteoarthritis
Researcher (PI) Jacob KLEIN
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE4, ERC-2016-ADG
Summary Can we exploit insights from the remarkably lubricated surfaces of articular cartilage, to create lubricants that may alleviate osteoarthritis (OA), the most widespread joint disease, affecting millions? These, succinctly, are the challenges of the present proposal. They are driven by our recent finding that lubrication of destabilised joints leads to changes in gene-regulation of the cartilage-embedded chondrocytes to protect against development of the disease. OA alleviation is known to arise through orthopedically suppressing shear-stresses on the cartilage, and a central premise of this project is that, by reducing friction at the articulating cartilage through suitable lubrication, we may achieve the same beneficial effect on the disease. The objectives of this project are to better understand the origins of cartilage boundary lubrication through examination of friction-reduction by its main molecular components, and exploit that understanding to create lubricants that, on intra-articular injection, will lubricate cartilage sufficiently well to achieve alleviation of OA via gene regulation. The project will examine, via both nanotribometric and macroscopic measurements, how the main molecular species implicated in cartilage lubrication, lipids, hyaluronan and lubricin, and their combinations, act together to form optimally lubricating boundary layers on model surfaces as well as on excised cartilage. Based on this, we shall develop suitable materials to lubricate cartilage in joints, using mouse models. Lubricants will further be optimized with respect to their retention in the joint and cartilage targeting, both in model studies and in vivo. The effect of the lubricants in regulating gene expression, in reducing pain and cartilage degradation, and in promoting stem-cell adhesion to the cartilage will be studied in a mouse model in which OA has been induced. Our results will have implications for treatment of a common, debilitating disease.
Max ERC Funding
2 499 944 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym CASe
Project Combinatorics with an analytic structure
Researcher (PI) Karim ADIPRASITO
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE1, ERC-2016-STG
Summary "Combinatorics, and its interplay with geometry, has fascinated our ancestors as shown by early stone carvings in the Neolithic period. Modern combinatorics is motivated by the ubiquity of its structures in both pure and applied mathematics.
The work of Hochster and Stanley, who realized the relation of enumerative questions to commutative algebra and toric geometry made a vital contribution to the development of this subject. Their work was a central contribution to the classification of face numbers of simple polytopes, and the initial success lead to a wealth of research in which combinatorial problems were translated to algebra and geometry and then solved using deep results such as Saito's hard Lefschetz theorem. As a caveat, this also made branches of combinatorics reliant on algebra and geometry to provide new ideas.
In this proposal, I want to reverse this approach and extend our understanding of geometry and algebra guided by combinatorial methods. In this spirit I propose new combinatorial approaches to the interplay of curvature and topology, to isoperimetry, geometric analysis, and intersection theory, to name a few. In addition, while these subjects are interesting by themselves, they are also designed to advance classical topics, for example, the diameter of polyhedra (as in the Hirsch conjecture), arrangement theory (and the study of arrangement complements), Hodge theory (as in Grothendieck's standard conjectures), and realization problems of discrete objects (as in Connes embedding problem for type II factors).
This proposal is supported by the review of some already developed tools, such as relative Stanley-Reisner theory (which is equipped to deal with combinatorial isoperimetries), combinatorial Hodge theory (which extends the "Kähler package" to purely combinatorial settings), and discrete PDEs (which were used to construct counterexamples to old problems in discrete geometry).
Max ERC Funding
1 337 200 €
Duration
Start date: 2016-12-01, End date: 2021-11-30
Project acronym CerQuS
Project Certified Quantum Security
Researcher (PI) Dominique Peer Ghislain UNRUH
Host Institution (HI) TARTU ULIKOOL
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary "Digital communication permeates all areas of today's daily life. Cryptographic protocols are used to secure that
communication. Quantum communication and the advent of quantum computers both threaten existing cryptographic
solutions, and create new opportunities for secure protocols. The security of cryptographic systems is normally ensured by
mathematical proofs. Due to human error, however, these proofs often contain errors, limiting the usefulness of said proofs.
This is especially true in the case of quantum protocols since human intuition is well-adapted to the classical world, but not
to quantum mechanics. To resolve this problem, methods for verifying cryptographic security proofs using computers (i.e.,
for ""certifying"" the security) have been developed. Yet, all existing verification approaches handle classical cryptography
only - for quantum protocols, no approaches exist.
This project will lay the foundations for the verification of quantum cryptography. We will design logics and software tools
for developing and verifying security proofs on the computer, both for classical protocols secure against quantum computer
(post-quantum security) and for protocols that use quantum communication.
Our main approach is the design of a logic (quantum relational Hoare logic, qRHL) for reasoning about the relationship
between pairs of quantum programs, together with an ecosystem of manual and automated reasoning tools, culminating in
fully certified security proofs for real-world quantum protocols.
As a final result, the project will improve the security of protocols in the quantum age, by removing one possible source of
human error. In addition, the project directly impacts the research community, by providing new foundations in program
verification, and by providing cryptographers with new tools for the verification of their protocols.
"
Max ERC Funding
1 716 475 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym CIRCOMMUNICATION
Project Deciphering molecular pathways of circadian clock communication
Researcher (PI) Gad ASHER
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), LS1, ERC-2017-COG
Summary The overarching objective of this interdisciplinary project is to elucidate the mechanisms through which billions of individual clocks in the body communicate with each other and tick in harmony. The mammalian circadian timing system consists of a master clock in the brain and subsidiary oscillators in almost every cell of the body. Since these clocks anticipate environmental changes and function together to orchestrate daily physiology and behavior, their temporal synchronization is critical.
Our recent finding that oxygen serves as a resetting cue for circadian clocks points towards the unprecedented involvement of blood gases as time signals. We will apply cutting-edge continuous physiological measurements in freely moving animals, alongside biochemical/molecular biology approaches and an advanced cell culture setup, to determine the molecular roles of oxygen, carbon dioxide and pH in circadian clock communication and function.
The intricate nature of the mammalian circadian system demands the presence of communication mechanisms between clocks throughout the body at multiple levels. While previous studies primarily addressed the role of the master clock in resetting peripheral clocks, our knowledge regarding the communication among clocks between and within peripheral organs is rudimentary. We will reconstruct the mammalian circadian system from the bottom up, sequentially restoring clocks in peripheral tissues of a non-rhythmic animal to (i) obtain a system-view of the peripheral circadian communication network; and (ii) study novel tissue-derived circadian communication mechanisms.
This integrative proposal addresses fundamental aspects of circadian biology. It is expected to unravel the circadian communication network and shed light on how billions of clocks in the body function in unison. Its impact extends beyond circadian rhythms and bears great potential for research on communication between cells/tissues in various fields of biology.
Max ERC Funding
1 999 945 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym CISS
Project Chiral Induced Spin Selectivity
Researcher (PI) Ron Naaman
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE4, ERC-2013-ADG
Summary The overall objective is to fully understand the Chiral Induced Spin Selectivity (CISS) effect, which was discovered recently. It was found that the transmission or conduction of electrons through chiral molecules is spin dependent. The CISS effect changes the paradigm that assumed that any spin manipulation requires magnetic materials or materials with high spin-orbit coupling. These unexpected findings open new possibilities for applying chiral molecules in spintronics and may provide new insights into electron transfer processes in biology.
The specific goals of the proposed research are:
(i) To establish the parameters that affect the magnitude of the CISS effect.
(ii) To demonstrate spintronics devices (memory and transistors) that are based on the CISS effect.
(iii) To investigate the role of CISS in electron transfer in biology related systems.
The experiments will be performed applying a combination of experimental methods including photoelectron spectroscopy, single molecule conduction, light-induced electron transfer, and spin specific conduction through magneto-electric devices.
The project has the potential to have a very large impact on various fields, from physics to biology. It will result in the establishment of chiral organic molecules as a new substrate for a wide range of spintronics-related applications, including magnetic memory, and in determining whether spins play a role in electron transfer processes in biology.
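A standard figure of merit for goal (i), common in the CISS literature though not spelled out in the abstract, is the spin polarization of the transmitted or conducted electrons,
\[ P \;=\; \frac{I_{\uparrow} - I_{\downarrow}}{I_{\uparrow} + I_{\downarrow}} , \]
where \(I_{\uparrow}\) and \(I_{\downarrow}\) denote the measured signals for the two spin orientations; establishing the parameters that affect the magnitude of the effect then amounts to mapping how P depends on molecular properties such as length and handedness.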
Max ERC Funding
2 499 998 €
Duration
Start date: 2013-10-01, End date: 2018-09-30
Project acronym CLC
Project Cryptography with Low Complexity
Researcher (PI) Benny Applebaum
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary The efficiency of cryptographic constructions is a fundamental question. Theoretically, it is important to understand what computational resources are needed to guarantee strong notions of security. Practically, highly efficient schemes are always desirable for real-world applications. More generally, the possibility of cryptography with low complexity has wide applications for problems in computational complexity, combinatorial optimization, and computational learning theory.
In this proposal we aim to understand the minimal computational resources needed to perform basic cryptographic tasks. In a nutshell, we suggest focusing on three main objectives. First, we would like to gain a better understanding of the cryptographic hardness of random local functions. Such functions can be computed by highly efficient circuits, and their cryptographic hardness provides a strong and clean formulation for the conjectured average-case hardness of constraint satisfaction problems - a fundamental subject which lies at the core of the theory of computer science. Our second objective is to harness our insights into the hardness of local functions to improve the efficiency of basic cryptographic building blocks such as pseudorandom functions. Finally, our third objective is to expand our theoretical understanding of garbled circuits, study their limitations, and improve their efficiency.
The suggested project can build bridges across different areas of computer science, such as random combinatorial structures, cryptography, and circuit complexity. It is expected to impact central problems in cryptography while enriching the general landscape of theoretical computer science.
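To make the first objective concrete, a random local function in the spirit studied here can be sketched in a few lines: every output bit applies one fixed predicate to a handful of randomly chosen input bits. The sketch below is illustrative only; the random wiring and the XOR-AND predicate are standard examples from this literature, not the proposal's exact construction.

import random

def sample_wiring(n, m, d, rng):
    # Each of the m output bits reads d distinct, randomly chosen input positions.
    return [rng.sample(range(n), d) for _ in range(m)]

def xor_and(bits):
    # An example 5-ary predicate studied in this area: x1 + x2 + x3 + x4*x5 (mod 2).
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def evaluate(wiring, x):
    # Apply the same fixed predicate to every tapped d-tuple of input bits.
    return [xor_and([x[i] for i in taps]) for taps in wiring]

rng = random.Random(2024)
n, m, d = 128, 256, 5    # m > n makes this a candidate *local* pseudorandom generator
wiring = sample_wiring(n, m, d, rng)
x = [rng.randrange(2) for _ in range(n)]
y = evaluate(wiring, x)  # conjecturally hard to distinguish from uniform random bits

Note that each output bit is computable by a constant-size circuit, which is exactly the "highly efficient circuits" property the summary refers to.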
Max ERC Funding
1 265 750 €
Duration
Start date: 2015-05-01, End date: 2021-04-30
Project acronym CloudRadioNet
Project Cloud Wireless Networks: An Information Theoretic Framework
Researcher (PI) Shlomo Shamai Shitz
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Advanced Grant (AdG), PE7, ERC-2015-AdG
Summary This five-year research proposal is focused on the development of novel information theoretic concepts and techniques and their use in identifying the ultimate communications limits and potential of different cloud radio network structures, in which the central signal processing is migrated to the cloud (remote central units) via fronthaul/backhaul infrastructure links. Moreover, it is also directed at introducing and studying optimal or close-to-optimal strategies for those systems, motivated by the developed theory. We plan to address wireless networks, having future cellular technology in mind, but the basic tools and approaches to be built and researched are relevant to other communication networks as well. Cloud communication networks motivate novel information theoretic views and perspectives that put backhaul/fronthaul connections in the center, thus deviating considerably from the standard theoretical studies of communications links and networks that are applied to this domain. Our approach accounts for the fact that in such networks information theoretic separation concepts are no longer optimal, hence isolating simple basic components of the network is essentially suboptimal. The proposed view incorporates, in a unified way, under the general cover of information theory: multi-terminal distributed networks; basic and timely concepts of distributed coding and communications; network communications, primarily network coding and index coding, as associated with interference alignment and caching; information-estimation relations and signal processing, addressing the impact of distributed channel state information directly; and a variety of fundamental concepts in optimization and random matrix theories. This path provides a natural theoretical framework directed towards better understanding the potential and limitations of cloud networks on the one hand, and paves the way to innovative communications design principles on the other.
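One elementary way to see why the fronthaul links sit "in the center" is the textbook cut-set bound for a single user whose signal X reaches the cloud only through a remote radio head observing Y and connected to the central unit by a fronthaul of capacity C_F (a schematic special case, stated here only for orientation):
\[ R \;\le\; \max_{p(x)} \min \bigl\{ I(X;Y),\; C_F \bigr\} . \]
No split of processing between the radio head and the cloud can beat both terms, and the observation that separation concepts are no longer optimal means the two terms cannot simply be engineered independently in a network setting.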
Max ERC Funding
1 981 782 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym CMetC
Project Selective Carbon-Carbon Bond Activation: A Wellspring of Untapped Reactivity
Researcher (PI) Ilan Marek
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Advanced Grant (AdG), PE5, ERC-2013-ADG
Summary The creation of new molecular entities and subsequent exploitation of their properties is central to a broad spectrum of research disciplines, from medicine to materials. Most, if not all, of the efforts of organic chemists have been directed to the development of creative strategies to build carbon-carbon and carbon-heteroatom bonds in a predictable and efficient manner. But is the creation of new bonds the only approach that organic chemistry should follow? Could we design the synthesis of challenging molecular skeletons no longer through the construction of carbon-carbon bonds but rather through the selective cleavage of carbon-carbon bonds (C-C bond activation)? The goal of this work is to develop powerful synthetic approaches for selective C-C bond activation and to demonstrate that it has the potential to be a general principle in organic synthesis for the regio- and diastereoselective, and even enantiomerically enriched, preparation of adducts, despite the fact that C-C single bonds belong among the least reactive functional groups in chemistry. The realization of this synthetic potential requires the ability to functionalize selectively one C-C bond in compounds containing many such bonds and an array of functional groups. This site-selective C-C bond activation is one of the greatest challenges that must be met for the approach to be used widely in complex-molecule synthesis. To emphasize the practicality of C-C bond activation, we will prepare in a single-pot operation challenging molecular frameworks possessing various stereogenic centers from very simple starting materials through selective C-C bond activation. Ideally, alkenes will be transformed in situ into alkanes that will subsequently undergo C-C activation even in the presence of functional groups. This work will lead to ground-breaking advances when non-strained cycloalkanes (cyclopentane, cyclohexane) undergo this smooth C-C bond activation with mild and non-toxic organometallic species.
Max ERC Funding
2 367 495 €
Duration
Start date: 2013-11-01, End date: 2018-10-31
Project acronym COHOMCODES
Project Robust Codes from Higher Dimensional Expanders
Researcher (PI) Tali Kaufman Halman
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2013-StG
Summary Error correcting codes play a fundamental role in computer science. Good codes are codes with asymptotically optimal rate and distance. Some of the most successful good codes are constructed using expander graphs. In recent years a new notion of robust error correcting codes, known as locally testable codes (LTCs), has emerged. Locally testable codes are codes in which the proximity of a vector to the code can be tested by probing the vector in a constant number of locations (independent of its length). LTCs are at the heart of Probabilistically Checkable Proofs (PCPs), and their construction has been sought since the discovery of the PCP theorem in the early 1990s.
Despite 20 years of research, it is still widely open whether good locally testable codes exist. LTCs present a completely new challenge to the field of error correcting codes. In the old paradigm a random code is a good code, and the main focus was to construct explicit codes that imitate the random code. However, a random code is not an LTC. Thus, contrary to traditional codes, there are no natural candidates for LTCs. The known constructions of robust codes are ad hoc, and there is a lack of theory that explains their existence.
The goal of the current research plan is to harness the emerging field of higher dimensional expanders and their topological properties for a systematic study of robust error correcting codes. Higher dimensional expanders are natural candidates for obtaining robust codes since they offer a strong form of redundancy that is essential for robustness. Such a form of redundancy is lacking in their one-dimensional analogue (i.e., expander graphs); hence, the known expander codes are not robust. We expect that our study will draw new connections between error correcting codes, high dimensional expanders, topology and probability that will shed new light on these fields and, in particular, will advance the construction of good and robust codes.
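For reference, one common formalization of local testability (standard in the literature, paraphrased here rather than quoted from the proposal): a code C ⊆ {0,1}^n is a q-query LTC with soundness s > 0 if there is a randomized tester T that reads at most q positions of a word w ∈ {0,1}^n and satisfies
\[ w \in C \;\Rightarrow\; \Pr[T^{w}\ \text{accepts}] = 1, \qquad \Pr[T^{w}\ \text{rejects}] \;\ge\; s \cdot \frac{\operatorname{dist}(w, C)}{n} \quad \text{for every } w , \]
with q and s constants independent of n; a "good" LTC additionally has constant rate and constant relative distance.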
Max ERC Funding
1 302 000 €
Duration
Start date: 2014-02-01, End date: 2020-01-31
Project acronym ColloQuantO
Project Colloidal Quantum Dot Quantum Optics
Researcher (PI) Dan Oron
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE LTD
Call Details Consolidator Grant (CoG), PE4, ERC-2015-CoG
Summary Colloidal semiconductor nanocrystals have already found significant use in various arenas, including bioimaging, displays, lighting, photovoltaics and catalysis. Here we aim to harness the extremely broad synthetic toolbox of colloidal semiconductor quantum dots in order to utilize them as unique sources of quantum states of light, extending well beyond the present attempts to use them as single photon sources. By tailoring the shape, size, composition and the organic ligand layer of quantum dots, rods and platelets, we propose their use as sources exhibiting a deterministic number of emitted photons upon saturated excitation and as tunable sources of correlated and entangled photon pairs. The versatility afforded in their fabrication by colloidal synthesis, rather than by epitaxial growth, presents a potential pathway to overcome some of the significant limitations of present-day solid state sources of nonclassical light, including color tunability, fidelity and ease of assembly into devices.
This program is a concerted effort both on colloidal synthesis of complex multicomponent semiconductor nanocrystals and on cutting-edge photophysical studies at the single-nanocrystal level. This should enable new types of emitters of nonclassical light, as well as provide a platform for the implementation of recently suggested schemes in quantum optics which have never been experimentally demonstrated. These include room-temperature sources of exactly two (or more) photons, correlated photon pairs from quantum dot molecules and entanglement based on time reordering. Fulfilling the optical and material requirements of this type of system, including photostability, control of carrier-carrier interactions, and a large quantum yield, will inevitably reveal some of the fundamental properties of coupled carriers in strongly confined structures.
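For orientation (standard quantum-optics background, not text from the proposal), the photon statistics of such sources are commonly characterized by the second-order correlation function

    g^{(2)}(\tau) = \frac{\langle I(t)\, I(t+\tau) \rangle}{\langle I(t) \rangle^{2}},

where g^{(2)}(0) < 1/2 is the operational signature of a single-photon emitter; an ideal n-photon Fock-state source gives g^{(2)}(0) = 1 - 1/n, so a deterministic two-photon source is expected to show g^{(2)}(0) close to 1/2.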
Max ERC Funding
2 000 000 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym CombiCompGeom
Project Combinatorial Aspects of Computational Geometry
Researcher (PI) Natan Rubin
Host Institution (HI) BEN-GURION UNIVERSITY OF THE NEGEV
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The project focuses on the interface between computational and combinatorial geometry.
Geometric problems emerge in a variety of computational fields that interact with the physical world.
The performance of geometric algorithms is determined by the description complexity of their underlying combinatorial structures. Hence, most theoretical challenges faced by computational geometry are of a distinctly combinatorial nature.
In the past two decades, computational geometry has been revolutionized by the powerful combination of random sampling techniques with the abstract machinery of geometric arrangements. These insights were used, in turn, to establish state-of-the-art results in combinatorial geometry. Nevertheless, a number of fundamental problems remained open and resisted numerous attempts to solve them.
Motivated by the recent breakthrough results, in which the PI played a central role, we propose two exciting lines of study with the potential to change the landscape of this field.
The first research direction concerns the complexity of Voronoi diagrams -- arguably the most common structures in computational geometry.
The second direction concerns combinatorial and algorithmic aspects of geometric intersection structures, including some fundamental open problems in geometric transversal theory. Many of these questions are motivated by geometric variants of general covering and packing problems, and all efficient approximation schemes for them must rely on the intrinsic properties of geometric graphs and hypergraphs.
Any progress in responding to these challenges will constitute a major breakthrough in both computational and combinatorial geometry.
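As a point of reference (our illustration, not the proposal's), the planar Voronoi diagram of n sites has only linear combinatorial complexity and is computable in O(n log n) time; the open questions concern generalized settings where no such bounds are known.

    import numpy as np
    from scipy.spatial import Voronoi

    sites = np.random.default_rng(1).random((100, 2))  # 100 random planar sites
    vor = Voronoi(sites)  # O(n log n) construction in the plane
    # The diagram has O(n) vertices and edges in the plane; bounding the
    # description complexity in more general settings is the hard part.
    print(len(vor.vertices), len(vor.ridge_points))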
Max ERC Funding
1 303 750 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym COMPCAMERAANALYZ
Project Understanding Designing and Analyzing Computational Cameras
Researcher (PI) Anat Levin
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary Computational cameras go beyond 2D images and allow the extraction of more dimensions from the visual world such as depth, multiple viewpoints and multiple illumination conditions. They also allow us to overcome some of the traditional photography challenges such as defocus blur, motion blur, noise and resolution. The increasing variety of computational cameras is raising the need for a meaningful comparison across camera types. We would like to understand which cameras are better for specific tasks, which aspects of a camera make it better than others and what is the best performance we can hope to achieve.
Our 2008 paper introduced a general framework to address the design and analysis of computational cameras. A camera is modeled as a linear projection in ray space. Decoding the camera data then amounts to inverting the linear projection. Since the number of sensor measurements is usually much smaller than the number of rays, the inversion must be treated as a Bayesian inference problem accounting for prior knowledge on the world.
Despite the significant progress made in recent years, the space of computational cameras is still far from being understood.
Computational camera analysis raises the following research challenges: 1) What is a good way to model prior knowledge on ray space? 2) How can we find efficient inference algorithms and robust ways to decode the world from the camera measurements? 3) How can we evaluate the expected reconstruction accuracy of a given camera? 4) How can the expected reconstruction performance be used to evaluate and compare camera types? 5) What is the best camera? Can we derive upper bounds on the optimal performance?
We propose research on all aspects of computational camera design and analysis. We propose new prior models which will significantly simplify the inference and evaluation tasks. We also propose new ways to bound and evaluate computational cameras with existing priors.
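A minimal numerical sketch of this decoding view (ours, with a plain Gaussian prior standing in for the richer world priors proposed above): the camera is a wide matrix A from rays to sensor measurements, and MAP decoding reduces to ridge-regularized inversion.

    import numpy as np

    rng = np.random.default_rng(0)
    rays, sensors = 200, 50                   # far fewer measurements than rays
    A = rng.standard_normal((sensors, rays))  # the camera's linear projection
    x_true = rng.standard_normal(rays)        # unknown ray-space signal
    n2, s2 = 0.01, 1.0                        # noise and prior variances
    y = A @ x_true + np.sqrt(n2) * rng.standard_normal(sensors)

    # MAP estimate under x ~ N(0, s2*I): solve (A^T A + (n2/s2) I) x = A^T y
    x_map = np.linalg.solve(A.T @ A + (n2 / s2) * np.eye(rays), A.T @ y)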
Max ERC Funding
756 845 €
Duration
Start date: 2010-12-01, End date: 2015-11-30
Project acronym COMPECON
Project Complexity and Simplicity in Economic Mechanisms
Researcher (PI) Noam NISAN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary As more and more economic activity is moving to the Internet, familiar economic mechanisms are being deployed
at unprecedented scales of size, speed, and complexity. In many cases this new complexity becomes the defining
feature of the deployed economic mechanism and the quantitative difference becomes a key qualitative one.
A well-studied example of such situations is how the humble single-item auction suddenly becomes a
billion-times repeated online ad auction, or even becomes a combinatorial auction with exponentially
many possible outcomes. Similar complexity explosions occur with various markets, with information
dissemination, with pricing structures, and with many other economic mechanisms.
The aim of this proposal is to study the role and implications of such complexity and to start
developing a coherent economic theory that can handle it. We aim to identify various measures of
complexity that are crucial bottlenecks and study them. Examples of such complexities include the
amount of access to data, the length of the description of a mechanism, its communication requirements,
the cognitive complexity required from users, and, of course, the associated computational complexity.
On the one hand, we will attempt to find ways of dealing effectively with complexity when it is needed, and on
the other hand, attempt to avoid complexity, when possible, replacing it with "simple" alternatives
without incurring too great a loss.
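A toy instance of the "humble single-item auction" (our sketch; the proposal's point is what happens when such mechanisms are scaled up): the sealed-bid second-price auction, in which truthful bidding is a dominant strategy.

    def second_price_auction(bids):
        # Highest bidder wins but pays the second-highest bid (Vickrey),
        # which makes truthful bidding a dominant strategy.
        order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
        return order[0], (bids[order[1]] if len(bids) > 1 else 0.0)

    print(second_price_auction([3.0, 7.5, 5.2]))  # -> (1, 5.2)

    # By contrast, a combinatorial auction over m items has 2**m bundles per
    # bidder, so merely describing a valuation is exponential: description
    # length itself becomes one of the complexity measures discussed above.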
Max ERC Funding
2 026 706 €
Duration
Start date: 2017-05-01, End date: 2022-04-30
Project acronym CONC-VIA-RIEMANN
Project High-Dimensional Convexity, Isoperimetry and Concentration via a Riemannian Vantage Point
Researcher (PI) Emanuel Milman
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE1, ERC-2014-STG
Summary "In recent years, the importance of superimposing the contribution of the measure to that of the metric, in determining the underlying space's (generalized Ricci) curvature, has been clarified in the works of Lott, Sturm, Villani and others, following the definition of Curvature-Dimension introduced by Bakry and Emery. We wish to systematically incorporate
this important idea of considering the measure and metric in tandem, in the study of questions pertaining to isoperimetric and concentration properties of convex domains in high-dimensional Euclidean space, where a-priori there is only a trivial metric (Euclidean) and trivial measure (Lebesgue).
The first step of enriching the class of uniform measures on convex domains to that of non-negatively curved (""log-concave"") measures in Euclidean space has been very successfully implemented in the last decades, leading to substantial progress in our understanding of volumetric properties of convex domains, mostly regarding concentration of linear functionals. However, the potential advantages of altering the Euclidean metric into a more general Riemannian one or exploiting related Riemannian structures have not been systematically explored. Our main paradigm is that in order to progress in non-linear questions pertaining to concentration in Euclidean space, it is imperative to cast and study these problems in the more general Riemannian context.
As witnessed by our own work over the last years, we expect that broadening the scope and incorporating tools from the Riemannian world will lead to significant progress in our understanding of the qualitative and quantitative structure of isoperimetric minimizers in the purely Euclidean setting. Such progress would have dramatic impact on long-standing fundamental conjectures regarding concentration of measure on high-dimensional convex domains, as well as other closely related fields such as Probability Theory, Learning Theory, Random Matrix Theory and Algorithmic Geometry."
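For the reader's orientation (a standard definition, not quoted from the proposal), the Bakry-Emery generalization referred to above replaces the Ricci tensor of a weighted manifold (M, g, \mu) with d\mu = e^{-V} d\mathrm{vol}_g by

    \mathrm{Ric}_{\mu} \;=\; \mathrm{Ric}_{g} + \nabla_{g}^{2} V ,

and the Curvature-Dimension condition CD(\kappa, \infty) asks that \mathrm{Ric}_{\mu} \ge \kappa g; the Euclidean case with a log-concave measure (flat metric, convex V) is exactly CD(0, \infty).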
Max ERC Funding
1 194 190 €
Duration
Start date: 2015-10-01, End date: 2020-09-30
Project acronym CONFINEDCHEM
Project Synthetic Confined Environments as Tools for Manipulating Chemical Reactivities and Preparing New Nanostructures
Researcher (PI) Rafal Klajn
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE5, ERC-2013-StG
Summary "Nature has long inspired chemists with its abilities to stabilize ephemeral chemical species, to perform chemical reactions with unprecedented rates and selectivities, and to synthesize complex molecules and fascinating inorganic nanostructures. What natural systems consistently exploit - which is yet fundamentally different from how chemists perform reactions - is their aspect of nanoscale confinement. The goal of the proposed research program is to integrate the worlds of organic and inorganic colloidal chemistry by means of manipulating chemical reactivities and synthesizing novel molecules and nanostructures inside synthetic confined environments created using novel, unconventional approaches based on inorganic, nanostructured building blocks. The three types of confined spaces we propose are as follows: 1) nanopores within reversibly self-assembling colloidal crystals (""dynamic nanoflasks""), 2) cavities of bowl-shaped metallic nanoparticles (NPs), and 3) surfaces of spherical NPs. By taking advantage of these unique tools, we will attempt to develop, respectively, 1) a conceptually new method for catalyzing chemical reactions using light, 2) nanoscale inclusion chemistry (a field based on host-guest ""complexes"" assembled form nanosized components) and 3) to use NPs as platforms for the development of new organic reactions. While these objectives are predominantly of a fundamental nature, they can easily evolve into a variety of practical applications. Specifically, we will pursue diverse goals such as the preparation of 1) a new family of inverse opals (with potentially fascinating optical and mechanical properties), 2) artificial chaperones (NPs assisting in protein folding), and 3) size- and shape-controlled polymeric vesicles. Overall, it is believed that this marriage of organic and colloidal chemistry has the potential to change the fundamental way we perform chemical reactions, paving the way to the discovery of new phenomena and unique structures."
Max ERC Funding
1 499 992 €
Duration
Start date: 2013-10-01, End date: 2018-09-30
Project acronym COSMICEXPLOSIONS
Project The nature of cosmic explosions
Researcher (PI) Avishay Gal-Yam
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE9, ERC-2012-StG_20111012
Summary Cosmic explosions, the violent deaths of stars, play a crucial role in many of the most interesting open questions in physics today. These events serve as “cosmic accelerators” for ultra-high-energy particles that are beyond the reach of even the most powerful terrestrial accelerators, as well as distant sources of elusive neutrinos. Explosions leave behind compact neutron stars and black hole remnants, natural laboratories to study strong gravity. Acting as cosmic furnaces, these explosions drive the chemical evolution of the Universe. Cosmic explosions trigger and inhibit star formation processes, and drive galactic evolution (“feedback”). Distances measured using supernova explosions as standard candles brought about the modern revolution in our view of the accelerating Universe, driven by enigmatic “dark energy”. Understanding the nature of cosmic explosions of all types is thus an extremely well-motivated endeavour. I have been studying cosmic explosions for over a decade and, since the earliest stages of my career, have followed an ambition to figure out the nature of cosmic explosions of all types, and to search for new types of explosions. Having already made several key discoveries, I now propose to undertake a comprehensive program to systematically tackle this problem. I review below the progress made in this field and the breakthrough results we have achieved so far, and propose to climb the next step in this scientific and technological ladder, combining new powerful surveys with comprehensive multi-wavelength and multi-disciplinary (observational and theoretical) analysis. My strategy is based on a combination of two main approaches: detailed studies of single objects which serve as keys to specific questions; and systematic studies of large samples, some of which I have, for the first time, been able to assemble and analyze, and those expected from forthcoming efforts. Both approaches have already yielded tantalizing results.
Max ERC Funding
1 499 302 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym CoupledNC
Project Coupled Nanocrystal Molecules: Quantum coupling effects via chemical coupling of colloidal nanocrystals
Researcher (PI) Uri BANIN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE4, ERC-2016-ADG
Summary Coupling of atoms is the basis of chemistry, yielding the beauty and richness of molecules and materials. Herein I introduce nanocrystal chemistry: the use of semiconductor nanocrystals (NCs) as artificial atoms to form NC molecules that are chemically, structurally and physically coupled. The unique emergent quantum mechanical consequences of the NC coupling will be studied and tailored to yield a chemical-quantum palette: coherent coupling of NC exciton states; dual-color single photon emitters functional also as photo-switchable chromophores in super-resolution fluorescence microscopy; electrically switchable single NC photon emitters for utilization as taggants for neuronal activity and as chromophores in displays; new NC structures for lasing; and coupled quasi-1D NC chains manifesting mini-band formation, tailored for a quantum-cascade effect for IR photon emission. A novel methodology of controlled oriented attachment of NC building blocks (in particular of core/shell NCs) will be presented to realize the coupled NC molecules. For this, a new type of Janus NC building block will be developed and used as an element in a Lego-type construction of double quantum dots (dimers), heterodimers coupling two different types of NCs, and more complex NC coupled quantum structures. To realize this NC chemistry approach, surface control is essential; it will be achieved via investigation of the chemical and dynamical properties of the NC surface ligand layer. As an outcome, I expect to decipher NC surface chemistry and dynamics, including their size dependence, and to introduce Janus NCs with chemically distinct and selectively modified surface faces. From this I will develop a new step-wise approach for the synthesis of coupled NC molecules and reveal the consequences of quantum coupling in them. This will inspire theoretical and further experimental work and will set the stage for the development of the diverse potential applications of coupled NC molecules.
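The simplest quantitative picture of such coherent coupling (our illustration, from textbook two-level quantum mechanics rather than the project itself): two exciton states of energies E_1, E_2 coupled with strength J hybridize according to

    H = \begin{pmatrix} E_{1} & J \\ J & E_{2} \end{pmatrix}, \qquad
    E_{\pm} = \frac{E_{1}+E_{2}}{2} \pm \sqrt{\left(\frac{E_{1}-E_{2}}{2}\right)^{2} + J^{2}},

so at resonance (E_1 = E_2) the NC molecule exhibits a level splitting of 2J, the direct spectroscopic signature of coherent coupling.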
Max ERC Funding
2 499 750 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym CRYOMATH
Project Cryo-electron microscopy: mathematical foundations and algorithms
Researcher (PI) Yoel SHKOLNISKY
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE1, ERC-2016-COG
Summary The importance of understanding the functions of the basic building blocks of life, such as proteins, cannot be overstated (as asserted by two recent Nobel prizes in Chemistry), as this understanding unravels the mechanisms that control all organisms. The critical step towards such an understanding is to reveal the structures of these building blocks. A leading method for resolving such structures is cryo-electron microscopy (cryo-EM), in which the structure of a molecule is recovered from its images taken by an electron microscope, by using sophisticated mathematical algorithms (to which my group has made several key mathematical and algorithmic contributions). Due to hardware breakthroughs in the past three years, cryo-EM has made a giant leap forward, introducing capabilities that until recently were unimaginable, opening an opportunity to revolutionize our biological understanding. As extracting information from cryo-EM experiments completely relies on mathematical algorithms, the method’s deep mathematical challenges that have emerged must be solved as soon as possible. Only then could cryo-EM realize its nearly inconceivable potential. These challenges, for which no adequate solutions exist (or none at all), focus on integrating information from huge sets of extremely noisy images reliably and efficiently. Based on the experience of my research group in developing algorithms for cryo-EM data processing, gained during the past eight years, we will address the three key open challenges of the field – a) deriving reliable and robust reconstruction algorithms from cryo-EM data, b) developing tools to process heterogeneous cryo-EM data sets, and c) devising validation and quality measures for structures determined from cryo-EM data. The fourth goal of the project, which ties all goals together and promotes the broad interdisciplinary impact of the project, is to merge all our algorithms into a software platform for state-of-the-art processing of cryo-EM data.
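A 2-D toy version of the reconstruction problem (our sketch, using a recent scikit-image API; in cryo-EM the projection orientations are unknown and the noise far more severe, which is precisely what makes the mathematics hard):

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, resize

    img = resize(shepp_logan_phantom(), (128, 128))    # ground-truth 'molecule'
    angles = np.linspace(0.0, 180.0, 120, endpoint=False)
    sino = radon(img, theta=angles)                    # projection images
    sino += 2.0 * np.random.default_rng(0).standard_normal(sino.shape)  # add noise
    recon = iradon(sino, theta=angles, filter_name="ramp")  # filtered back-projection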
Max ERC Funding
1 751 250 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym CSG
Project C^0 symplectic geometry
Researcher (PI) Lev Buhovski
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2017-STG
Summary "The objective of this proposal is to study ""continuous"" (or C^0) objects, as well as C^0 properties of smooth objects, in the field of symplectic geometry and topology. C^0 symplectic geometry has seen spectacular progress in recent years, drawing attention of mathematicians from various background. The proposed study aims to discover new fascinating C^0 phenomena in symplectic geometry.
One circle of questions concerns symplectic and Hamiltonian homeomorphisms. Recent studies indicate that these objects possess both rigidity and flexibility, appearing in surprising and counter-intuitive ways. Our understanding of symplectic and Hamiltonian homeomorphisms is far from being satisfactory, and here we intend to study questions related to action of symplectic homeomorphisms on submanifolds. Some other questions are about Hamiltonian homeomorphisms in relation to the celebrated Arnold conjecture. The PI suggests to study spectral invariants of continuous Hamiltonian flows, which allow to formulate the C^0 Arnold conjecture in higher dimensions. Another central problem that the PI will work on is the C^0 flux conjecture.
A second circle of questions is about the Poisson bracket operator, and its functional-theoretic properties. The first question concerns the lower bound for the Poisson bracket invariant of a cover, conjectured by L. Polterovich who indicated relations between this problem and quantum mechanics. Another direction aims to study the C^0 rigidity versus flexibility of the L_p norm of the Poisson bracket. Despite a recent progress in dimension two showing rigidity, very little is known in higher dimensions. The PI proposes to use combination of tools from topology and from hard analysis in order to address this question, whose solution will be a big step towards understanding functional-theoretic properties of the Poisson bracket operator."
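For reference (the standard definition, not text from the proposal), on R^{2n} with coordinates (q, p) the Poisson bracket of two smooth functions is

    \{f, g\} \;=\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial q_{i}}\,\frac{\partial g}{\partial p_{i}} - \frac{\partial f}{\partial p_{i}}\,\frac{\partial g}{\partial q_{i}} \right),

an operator built from first derivatives; the rigidity questions above ask how this derivative-based quantity behaves under C^0-small perturbations of f and g.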
Max ERC Funding
1 345 282 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym DCENSY
Project Doping, Charge Transfer and Energy Flow in Hybrid Nanoparticle Systems
Researcher (PI) Uri Banin
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE4, ERC-2009-AdG
Summary We target a frontier in nanocrystal science of combining disparate materials into a single hybrid nanosystem. This offers an intriguing route to engineer nanomaterials with multiple functionalities in ways that are not accessible in bulk materials or in molecules. Such control of novel material combinations on a single nanoparticle, or in a super-structure of assembled nanoparticles, presents, alongside the synthesis challenges, fundamental questions concerning the physical attributes of nanoscale systems. My goals are to create new highly controlled hybrid nanoparticle systems, focusing on combinations of semiconductors and metals, and to decipher the fundamental principles governing doping in nanoparticles and charge and energy transfer processes among components of the hybrid systems. The research addresses several key challenges: First, in synthesis, combining disparate material components into one hybrid nanoparticle system. Second, in self-assembly, organizing a combination of semiconductor (SC) and metal nanoparticle building blocks into hybrid systems with controlled architecture. Third, in fundamental physico-chemical questions pertaining to the unique attributes of the hybrid systems, constituting a key component of the research. A first aspect concerns doping of SC nanoparticles with metal atoms. A second aspect concerns light-induced charge transfer between the SC and metal parts of the hybrid constructs. A third, related aspect concerns energy transfer processes between the SC and metal components and the interplay between near-field enhancement and fluorescence quenching effects. Due to the new properties, significant impact on nanocrystal applications in solar energy harvesting, biological tagging, sensing, optics and electro-optics is expected.
Max ERC Funding
2 499 000 €
Duration
Start date: 2010-06-01, End date: 2015-05-31
Project acronym DeepFace
Project Understanding Deep Face Recognition
Researcher (PI) Lior Wolf
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary Face recognition is a fascinating domain: no other domain seems to present as much value when analysing casual photos; it is one of the few domains in machine learning in which millions of classes are routinely learned; and the trade-off between subtle inter-identity variations and pronounced intra-identity variations forms a unique challenge.
The advent of deep learning has brought machines to what is considered a human level of performance. However, there are many research questions that are left open. At the topmost level, we ask two questions: what is unique about faces in comparison to other recognition tasks that also employ deep networks, and how can we make the next leap in performance of automatic face recognition?
We consider three domains of research. The first is the study of methods that promote effective transfer learning. This is crucial since all state of the art face recognition methods rely on transfer learning. The second domain is the study of the tradeoffs that govern the optimal utilization of the training data and how the properties of the training data affect the optimal network design. The third domain is the post transfer utilization of the learned deep networks, where given the representations of a pair of face images, we seek to compare them in the most accurate way.
Throughout this proposal, we put an emphasis on theoretical reasoning. I aim to support the developed methods by a theoretical framework that would both justify their usage and provide concrete guidelines for using them. My goal of achieving a leap forward in performance through a level of theoretical analysis that is unparalleled in object recognition makes our research agenda truly high-risk/high-gain. I have been at the forefront of face recognition for the last 8 years, and my lab's recent achievements in deep learning suggest that we will be able to carry out this research. To further support its feasibility, we present very promising initial results.
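A minimal sketch of the transfer-learning pattern the proposal builds on (ours, in PyTorch, with hypothetical choices of backbone, embedding size and threshold):

    import torch.nn as nn
    from torchvision import models

    # Freeze a backbone pretrained on a source task; retrain only a new
    # embedding head on the target face data (all sizes hypothetical).
    backbone = models.resnet18(weights="IMAGENET1K_V1")
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.fc = nn.Linear(backbone.fc.in_features, 128)  # trainable 128-d head

    def same_identity(img_a, img_b, threshold=0.6):
        # Post-transfer comparison of two face representations.
        emb_a, emb_b = backbone(img_a), backbone(img_b)
        return nn.functional.cosine_similarity(emb_a, emb_b, dim=-1) > threshold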
Max ERC Funding
1 696 888 €
Duration
Start date: 2017-05-01, End date: 2022-04-30
Project acronym DeepInternal
Project Going Deep and Blind with Internal Statistics
Researcher (PI) Michal IRANI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Unsupervised visual inference can often be performed by exploiting the internal redundancy inside a single visual datum (an image or a video). The strong repetition of patches inside a single image/video provides a powerful data-specific prior for solving a variety of vision tasks in a “blind” manner: (i) Blind in the sense that sophisticated unsupervised inferences can be made with no prior examples or training; (ii) Blind in the sense that complex ill-posed Inverse-Problems can be solved, even when the forward degradation is unknown.
While the above fully unsupervised approach achieved impressive results, it relies on internal data alone, hence cannot enjoy the “wisdom of the crowd” which Deep-Learning (DL) so wisely extracts from external collections of images, yielding state-of-the-art (SOTA) results. Nevertheless, DL requires huge amounts of training data, which restricts its applicability. Moreover, some internal image-specific information, which is clearly visible, remains unexploited by today's DL methods. One such example is shown in Fig.1.
We propose to combine the power of these two complementary approaches, unsupervised Internal Data Recurrence and Deep Learning, to obtain the best of both worlds. If successful, this will have several important outcomes, including:
• A wide range of low-level & high-level inferences (image & video).
• A continuum between Internal & External training – a platform to explore theoretical and practical tradeoffs between amount of available training data and optimal Internal-vs-External training.
• Enable totally unsupervised DL when no training data are available.
• Enable supervised DL with modest amounts of training data.
• New applications, disciplines and domains, which are enabled by the unified approach.
• A platform for substantial progress in video analysis (which has been lagging behind so far due to the strong reliance on exhaustive supervised training data).
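Returning to the internal-statistics prior of the opening paragraph, a brute-force sketch (ours) of patch recurrence inside a single image:

    import numpy as np

    def nearest_internal_patch(img, i0, j0, k=5):
        # Find the best non-overlapping match of the k-by-k patch at (i0, j0)
        # within the same image; the strong recurrence of small patches is the
        # data-specific prior exploited for 'blind' inference.
        h, w = img.shape
        q = img[i0:i0 + k, j0:j0 + k]
        best, best_d = None, np.inf
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                if abs(i - i0) < k and abs(j - j0) < k:
                    continue  # skip patches overlapping the query
                d = float(((img[i:i + k, j:j + k] - q) ** 2).sum())
                if d < best_d:
                    best, best_d = (i, j), d
        return best, best_d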
Max ERC Funding
2 466 940 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym DELPHI
Project Computing Answers to Complex Questions in Broad Domains
Researcher (PI) Jonathan Berant
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary The explosion of information around us has democratized knowledge and transformed its availability for
people around the world. Still, since information is mediated through automated systems, access is bounded
by their ability to understand language.
Consider an economist asking “What fraction of the top-5 growing countries last year raised their CO2 emissions?”.
While the required information is available, answering such complex questions automatically is
not possible. Current question answering systems can answer simple questions in broad domains, or complex
questions in narrow domains. However, broad and complex questions are beyond the reach of the state of the art.
This is because systems are unable to decompose questions into their parts, and find the relevant information
in multiple sources. Further, as answering such questions is hard for people, collecting large datasets to train
such models is prohibitive.
In this proposal I ask: Can computers answer broad and complex questions that require reasoning over
multiple modalities? I argue that by synthesizing the advantages of symbolic and distributed representations
the answer will be “yes”. My thesis is that symbolic representations are suitable for meaning composition, as
they provide interpretability, coverage, and modularity. Complementarily, distributed representations (learned
by neural nets) excel at capturing the fuzziness of language. I propose a framework where complex questions
are symbolically decomposed into sub-questions, each of which is answered with a neural network, and the final answer
is computed from all gathered information.
This research tackles foundational questions in language understanding. What is the right representation
for reasoning in language? Can models learn to perform complex actions in the face of paucity of data?
Moreover, my research, if successful, will transform how we interact with machines, and define a role for
them as research assistants in science, education, and our daily life.
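A hypothetical sketch of the proposed decompose-then-answer loop (all names and the naive decomposition rule are illustrative assumptions, not the project's actual system): a symbolic step splits the question into sub-questions, a neural module answers each, and the final answer is computed from the gathered results.

from typing import Callable, List

def decompose(question: str) -> List[str]:
    # Stand-in for the symbolic decomposition step; a real system would
    # produce a typed program over sub-questions, not a string split.
    return [q.strip() + "?" for q in question.rstrip("?").split(" and ")]

def answer_complex(question: str, neural_qa: Callable[[str], str]) -> List[str]:
    # Each sub-question is answered with a neural module; here the final
    # answer is simply the collection of the sub-answers.
    return [neural_qa(q) for q in decompose(question)]

# Toy neural module stub for demonstration.
toy_qa = lambda q: "<answer to: " + q + ">"
print(answer_complex(
    "Which countries grew fastest last year and which raised their CO2 emissions?",
    toy_qa))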
Max ERC Funding
1 499 375 €
Duration
Start date: 2019-04-01, End date: 2024-03-31
Project acronym DEPENDENTCLASSES
Project Model theory and its applications: dependent classes
Researcher (PI) Saharon Shelah
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2013-ADG
Summary Model theory deals with general classes of structures (called models).
Specific examples of such classes are: the class of rings or the class of
algebraically closed fields.
It turns out that counting the so-called complete types over models in the
class has an important role in the development of model theory in general and
stability theory in particular.
Stable classes are those with relatively few complete types (over structures
from the class); understanding stable classes has been central in model theory
and its applications.
Recently, I have proved a new dichotomy among the unstable classes:
Instead of counting all the complete types, they are counted up to conjugacy.
Classes which have few types up to conjugacy are proved to be so-called
``dependent'' classes (which have also been called NIP classes).
I have developed (under reasonable restrictions) a ``recounting theorem'',
parallel to the basic theorems of stability theory.
I have started to develop some of the basic properties of this new approach.
The goal of the current project is to develop systematically the theory of
dependent classes. The above-mentioned results give a strong indication that this
new theory can eventually be as useful as the (by now classical) stability
theory. In particular, it covers many well-known classes which stability theory
cannot treat.
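For reference, the notion of dependence referred to above admits the following standard textbook formulation: a formula $\varphi(x,y)$ has the independence property (IP) if in some model there are tuples $(a_i)_{i<\omega}$ and $(b_J)_{J\subseteq\omega}$ such that
$$\models \varphi(a_i, b_J) \iff i \in J,$$
and a class is dependent (NIP) precisely when no formula has IP.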
Max ERC Funding
1 748 000 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym DG-PESP-CS
Project Deterministic Generation of Polarization Entangled single Photons Cluster States
Researcher (PI) David Gershoni
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Advanced Grant (AdG), PE2, ERC-2015-AdG
Summary Measurement-based quantum computing is one of the most fault-tolerant architectures proposed for quantum information processing. It opens the possibility of performing quantum computing tasks using linear optical systems. An efficient route for measurement-based quantum computing utilizes highly entangled states of photons, called cluster states. Propagating and processing quantum information is made possible this way using only single-qubit measurements, and the approach is highly resilient to qubit losses. In addition, single-qubit measurements of polarization qubits are easily performed with high fidelity using standard optical tools. These features make photonic clusters excellent platforms for quantum information processing.
Constructing photonic cluster states, however, is a formidable challenge, attracting vast amounts of research efforts. While in principle it is possible to build up cluster states using interferometry, such a method is of a probabilistic nature and entails a large overhead of resources. The use of entangled photon pairs reduces this overhead by a small factor only.
We outline a novel route for constructing a deterministic source of photonic cluster states using a device based on a semiconductor quantum dot. Our proposal follows a suggestion by Lindner and Rudolph. We use repeated optical excitations of a long-lived coherent spin confined in a single semiconductor quantum dot and demonstrate for the first time a practical realization of their proposal. Our preliminary demonstration presents a breakthrough in quantum technology, since a deterministic source of photonic cluster states reduces the resources needed for quantum information processing. It may have revolutionary prospects for technological applications as well as for our fundamental understanding of quantum systems.
We propose to capitalize on this recent breakthrough and concentrate on R&D which will further advance this forefront field of science and technology by utilizing the horizons that it opens.
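For context, the cluster states in question are standard graph states: for a chain of $n$ qubits,
$$|C_n\rangle \;=\; \Big(\prod_{i=1}^{n-1} CZ_{i,i+1}\Big)\, |+\rangle^{\otimes n}, \qquad |+\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big),$$
where $CZ_{i,i+1}$ is a controlled-phase gate between neighboring qubits. In the Lindner-Rudolph scheme, each photon emitted by the periodically excited quantum-dot spin extends such a linear cluster by one qubit.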
Max ERC Funding
2 502 974 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym DIFFOP
Project Nonlinear Data and Signal Analysis with Diffusion Operators
Researcher (PI) Ronen TALMON
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Nowadays, extensive collection and storage of massive data sets have become routine in multiple disciplines and in everyday life. These large amounts of intricate data often make arithmetic on data samples and basic comparisons between them problematic, raising new challenges to traditional data analysis objectives such as filtering and prediction. Furthermore, the availability of such data constantly pushes the boundaries of data analysis to new emerging domains, ranging from neuronal and social network analysis to multimodal sensor fusion. The combination of evolved data and new domains drives a fundamental change in the field of data analysis. Indeed, many classical model-based techniques have become obsolete since their models do not embody the richness of the collected data. Today, one notable avenue of research is the development of nonlinear techniques that transition from data to representations, without deriving models in closed form. The vast majority of such existing data-driven methods operate directly on the data, a hard task by itself when the data are large and elaborate. The goal of this research is to develop a fundamentally new methodology for high-dimensional data analysis with diffusion operators, making use of recent transformative results in manifold and geometry learning. More concretely, shifting the focus from processing the data samples themselves to considering structured data through the lens of diffusion operators will introduce new powerful “handles” to data, capturing their complexity efficiently. We will study the basic theory behind this nonlinear analysis, develop new operators for this purpose, and devise efficient data-driven algorithms. In addition, we will explore how our approach can be leveraged for devising efficient solutions to a broad range of open real-world data analysis problems, involving intrinsic representations, sensor fusion, time-series analysis, network connectivity inference, and domain adaptation.
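A minimal sketch of constructing a diffusion operator from data, following the standard diffusion-maps recipe (the kernel width and variable names are illustrative assumptions):

import numpy as np

def diffusion_operator(X, eps=1.0):
    """Row-stochastic diffusion operator P built from data X of shape (n, d)."""
    sq = (X ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    K = np.exp(-d2 / eps)                            # Gaussian affinity kernel
    return K / K.sum(axis=1, keepdims=True)          # normalize rows: Markov matrix

X = np.random.rand(200, 3)
P = diffusion_operator(X)
# Leading non-trivial eigenvectors of P serve as intrinsic coordinates of the data.
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
embedding = eigvecs.real[:, order[1:3]]  # skip the trivial constant eigenvector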
Max ERC Funding
1 260 000 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym DIMENSION
Project High-Dimensional Phenomena and Convexity
Researcher (PI) Boaz Binyamin Klartag
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE1, ERC-2012-StG_20111012
Summary High-dimensional problems with a geometric flavor appear in quite a few branches of mathematics, mathematical physics and theoretical computer science. A priori, one would think that the diversity and the rapid increase of the number of configurations would make it impossible to formulate general, interesting theorems that apply to large classes of high-dimensional geometric objects. The underlying theme of the proposed project is that the contrary is often true. Mathematical developments of the last decades indicate that high dimensionality, when viewed correctly, may create remarkable order and simplicity, rather than complication. For example, Dvoretzky's theorem demonstrates that any high-dimensional convex body has nearly-Euclidean sections of a high dimension. Another example is the central limit theorem for convex bodies due to the PI, according to which any high-dimensional convex body has approximately Gaussian marginals. There are a number of strong motifs in high-dimensional geometry, such as the concentration of measure, which seem to compensate for the vast amount of different possibilities. Convexity is one of the ways in which to harness these motifs and thereby formulate clean, non-trivial theorems. The scientific goals of the project are to develop new methods for the study of convexity in high dimensions beyond the concentration of measure, to explore emerging connections with other fields of mathematics, and to solve the outstanding problems related to the distribution of volume in high-dimensional convex sets.
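For reference, the quantitative form of Dvoretzky's theorem cited above (standard statement): for every $\varepsilon > 0$ there is $c(\varepsilon) > 0$ such that every $n$-dimensional convex body has a central section of dimension $k \ge c(\varepsilon)\log n$ whose Banach-Mazur distance to the Euclidean ball is at most $1+\varepsilon$.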
Max ERC Funding
998 000 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym DIRECTEDINFO
Project Investigating Directed Information
Researcher (PI) Haim Permuter
Host Institution (HI) BEN-GURION UNIVERSITY OF THE NEGEV
Call Details Starting Grant (StG), PE7, ERC-2013-StG
Summary This research investigates a new measure that arises in information theory
called directed information. Recent advances, including our preliminary results, show that
directed information arises in communication as the maximum rate that can be transmitted reliably
in channels with feedback. The directed information is a multi-letter expression and therefore very
hard to optimize or compute.
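For reference, the multi-letter expression in question is Massey's directed information (a textbook definition):
$$I(X^n \to Y^n) \;=\; \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1}),$$
where $X^i = (X_1,\ldots,X_i)$; the conditioning on growing histories is what makes direct optimization hard.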
Our plan is first of all to find an efficient methodology for optimizing the measure using the
dynamic programming framework and convex optimization tools. An important by-product of
finding the fundamental limits is finding coding schemes that achieve them. Second, we
plan to find new roles for directed information in communication, especially in networks with
bi-directional communication and in data compression with causal conditions. Third, encouraged by
preliminary work on the interpretation of directed information in economics and estimation theory,
we plan to show that directed information has interpretations in additional fields such as
statistical physics. We plan to show that there is a duality relation between different fields
with causal constraints. Due to this duality, insights and breakthroughs in one problem will lead
to new insights in other problems. Finally, we will apply directed information to the statistical
inference of causal dependence. We will show how to estimate and use the directed information
estimator to measure causal influence between two or more processes. In particular, one of the
questions we plan to answer is the influence of industrial activities (e.g., $\text{CO}_2$
emission volumes) on global warming.
Our main focus will be to develop a deeper understanding of the mathematical properties of
directed information, a process that is instrumental to each problem. Due to their theoretical
proximity and their interdisciplinary nature, progress in one problem will lead to new insights
in other problems. A common set of mathematical tools developed in
Max ERC Funding
1 224 600 €
Duration
Start date: 2013-08-01, End date: 2019-07-31
Project acronym DLGAPS
Project Dynamics of Lie group actions on parameter spaces
Researcher (PI) Barak Weiss
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2011-StG_20101014
Summary There are many parallels between Lie group actions on homogeneous spaces and the action of $\SL_2(\R)$ and its subgroups on strata of translation or half-translation surfaces. I propose to investigate these two spaces in parallel, focusing on the dynamical
behavior, and more specifically, the description of orbit-closures.
I intend to utilize existing and emerging measure rigidity results, and to develop new topological
approaches. These should also shed light on the geometry and topology of the spaces. I propose to apply results concerning these spaces to the study of diophantine approximations (approximation on fractals), geometry of numbers (Minkowski's conjecture), interval exchanges, and rational billiards.
Max ERC Funding
850 000 €
Duration
Start date: 2011-10-01, End date: 2016-09-30
Project acronym DMMCA
Project Discrete Mathematics: methods, challenges and applications
Researcher (PI) Noga Alon
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE1, ERC-2008-AdG
Summary Discrete Mathematics is a fundamental mathematical discipline as well as an essential component of many mathematical areas, and its study has experienced an impressive growth in recent years. Some of the main reasons for this growth are the broad applications of tools and techniques from extremal and probabilistic combinatorics in the rapid development of theoretical Computer Science, in the spectacular recent results in Additive Number Theory and in the study of basic questions in Information Theory. While in the past many of the basic combinatorial results were obtained mainly by ingenuity and detailed reasoning, the modern theory has grown out of this early stage, and often relies on deep, well-developed tools, like the probabilistic method and algebraic, topological and geometric techniques. The work of the principal investigator, partly jointly with several collaborators and students, and partly in individual efforts, has played a significant role in the introduction of powerful algebraic, probabilistic, spectral and geometric techniques that influenced the development of modern combinatorics. In the present project he aims to further develop such tools and to tackle some basic open problems in Combinatorics, as well as significant questions in Additive Combinatorics, Information Theory, and theoretical Computer Science. Progress on the problems mentioned in this proposal, and the study of related ones, is expected to provide new insights on these problems and to lead to the development of novel fruitful techniques that are likely to be useful in Discrete Mathematics as well as in related areas.
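As one classical instance of the probabilistic method mentioned above (a textbook example due to Erdős): if $\binom{n}{k}\, 2^{1-\binom{k}{2}} < 1$, then some 2-coloring of the edges of $K_n$ contains no monochromatic $K_k$, hence the Ramsey number satisfies $R(k,k) > n$; this first-moment computation already gives $R(k,k) > 2^{k/2}$ for $k \ge 3$.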
Max ERC Funding
1 061 300 €
Duration
Start date: 2008-12-01, End date: 2013-11-30
Project acronym DPI
Project Deep Packet Inspection to Next Generation Network Devices
Researcher (PI) Anat Bremler-Barr
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Call Details Starting Grant (StG), PE7, ERC-2010-StG_20091028
Summary Deep packet inspection (DPI) lies at the core of contemporary Network Intrusion Detection/Prevention Systems and Web Application Firewalls. DPI aims to identify various malware (including spam and viruses) by inspecting both the header and the payload of each packet and comparing it to a known set of patterns. DPI is often performed on the critical path of packet processing; thus the overall performance of the security tools is dominated by the speed of DPI.
Traditionally, DPI considered only exact string patterns. However, in modern network devices patterns are often represented by regular expressions due to their superior expressiveness. Matching both exact strings and regular expressions is a well-studied area in Computer Science; however, all well-known solutions are insufficient for current network demands. First, current solutions do not scale in terms of speed, memory and power requirements: while current network devices work at 10-100 Gbps and hold thousands of patterns, traditional solutions suffer from exponential memory size or exponential time and induce prohibitive power consumption. Second, non-clear-text traffic, such as compressed traffic, is becoming a dominant portion of Internet traffic and is clearly harder to inspect.
In this research we design new algorithms and schemes that cope with today's demands. This is an evolving area in both academia and industry, where currently there is no adequate solution.
We intend to use recent advances in hardware to cope with these demanding requirements. More specifically, we plan to use Ternary Content-Addressable Memories (TCAMs), which have become a standard commodity in contemporary network devices. TCAMs can compare a key against all rules in a memory in parallel and thus provide high throughput. We believ
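As background, the matching model of a TCAM can be sketched as follows (a software simulation for illustration; a real TCAM performs all comparisons in parallel in hardware, and the rule set below is made up):

# Each TCAM rule is a (value, mask) pair; mask bits set to 0 are the
# ternary "don't care" positions of the entry.
def tcam_match(key: int, rules):
    # Hardware compares the key against all rules simultaneously and
    # returns the highest-priority (here: first) match.
    for i, (value, mask) in enumerate(rules):
        if (key & mask) == (value & mask):
            return i
    return None

rules = [
    (0b10100000, 0b11110000),  # matches any key starting with 1010
    (0b00000001, 0b00000001),  # matches any odd key
]
print(tcam_match(0b10101111, rules))  # -> 0
print(tcam_match(0b00000011, rules))  # -> 1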
Max ERC Funding
990 400 €
Duration
Start date: 2010-11-01, End date: 2016-10-31
Project acronym DYNA-MIC
Project Deep non-invasive imaging via scattered-light acoustically-mediated computational microscopy
Researcher (PI) Ori Katz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE7, ERC-2015-STG
Summary Optical microscopy, perhaps the most important tool in biomedical investigation and clinical diagnostics, is currently held back by the assumption that it is not possible to noninvasively image microscopic structures more than a fraction of a millimeter deep inside tissue. The governing paradigm is that high-resolution information carried by light is lost due to random scattering in complex samples such as tissue. While non-optical imaging techniques, employing non-ionizing radiation such as ultrasound, allow deeper investigations, they possess drastically inferior resolution and do not permit microscopic studies of cellular structures, crucial for accurate diagnosis of cancer and other diseases.
I propose a new kind of microscope, one that can peer deep inside visually opaque samples, combining the sub-micron resolution of light with the penetration depth of ultrasound. My novel approach is based on our discovery that information on microscopic structures is contained in random scattered-light patterns. It breaks current limits by exploiting the randomness of scattered light rather than struggling to fight it.
We will transform this concept into a breakthrough imaging platform by combining ultrasonic probing and modulation of light with advanced digital signal processing algorithms, extracting the hidden microscopic structure by two complementary approaches: 1) By exploiting the stochastic dynamics of scattered light using methods developed to surpass the diffraction limit in optical nanoscopy and for compressive sampling, harnessing nonlinear effects. 2) Through the analysis of intrinsic correlations in scattered light that persist deep inside scattering tissue.
This proposal is formed by bringing together novel insights on the physics of light in complex media, advanced microscopy techniques, and ultrasound-mediated imaging. It is made possible by the new ability to digitally process vast amounts of scattering data, and has the potential to impact many fields.
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym EffectiveTG
Project Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics
Researcher (PI) Gal BINYAMINI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary Tame geometry studies structures in which every definable set has a
finite geometric complexity. The study of tame geometry spans several
interrelated mathematical fields, including semialgebraic,
subanalytic, and o-minimal geometry. The past decade has seen the
emergence of a spectacular link between tame geometry and arithmetic
following the discovery of the fundamental Pila-Wilkie counting
theorem and its applications in unlikely diophantine
intersections. The P-W theorem itself relies crucially on the
Yomdin-Gromov theorem, a classical result of tame geometry with
fundamental applications in smooth dynamics.
It is natural to ask whether the complexity of a tame set can be
estimated effectively in terms of the defining formulas. While a large
body of work is devoted to answering such questions in the
semialgebraic case, surprisingly little is known concerning more
general tame structures - specifically those needed in recent
applications to arithmetic. The nature of the link between tame
geometry and arithmetic is such that any progress toward effectivizing
the theory of tame structures will likely lead to effective results
in the domain of unlikely intersections. Similarly, a more effective
version of the Yomdin-Gromov theorem is known to imply important
consequences in smooth dynamics.
The proposed research will approach effectivity in tame geometry from
a fundamentally new direction, bringing to bear methods from the
theory of differential equations which have until recently never been
used in this context. Toward this end, our key goals will be to gain
insight into the differential algebraic and complex analytic structure
of tame sets; and to apply this insight in combination with results
from the theory of differential equations to effectivize key results
in tame geometry and its applications to arithmetic and dynamics. I
believe that my preliminary work in this direction amply demonstrates
the feasibility and potential of this approach.
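For reference, the Pila-Wilkie counting theorem mentioned above states: if $X \subseteq \mathbb{R}^n$ is definable in an o-minimal structure and $X^{\mathrm{alg}}$ denotes the union of the connected positive-dimensional semialgebraic subsets of $X$, then for every $\varepsilon > 0$ the number of rational points on $X \setminus X^{\mathrm{alg}}$ of height at most $T$ is $O_{X,\varepsilon}(T^{\varepsilon})$.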
Max ERC Funding
1 155 027 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym EMERGE
Project Reconstructing the emergence of the Milky Way’s stellar population with Gaia, SDSS-V and JWST
Researcher (PI) Dan Maoz
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE9, ERC-2018-ADG
Summary Understanding how the Milky Way arrived at its present state requires a large volume of precision measurements of our Galaxy’s current makeup, as well as an empirically based understanding of the main processes involved in the Galaxy’s evolution. Such data are now about to arrive in the flood of quality information from Gaia and SDSS-V. The demography of the stars and of the compact stellar remnants in our Galaxy, in terms of phase-space location, mass, age, metallicity, and multiplicity are data products that will come directly from these surveys. I propose to integrate this information into a comprehensive picture of the Milky Way’s present state. In parallel, I will build a Galactic chemical evolution model, with input parameters that are as empirically based as possible, that will reproduce and explain the observations. To get those input parameters, I will measure the rates of supernovae (SNe) in nearby galaxies (using data from past and ongoing surveys) and in high-redshift proto-clusters (by conducting a SN search with JWST), to bring into sharp focus the element yields of SNe and the distribution of delay times (the DTD) between star formation and SN explosion. These empirically determined SN metal-production parameters will be used to find the observationally based reconstruction of the Galaxy’s stellar formation history and chemical evolution that reproduces the observed present-day Milky Way stellar population. The population census of stellar multiplicity with Gaia+SDSS-V, and particularly of short-orbit compact-object binaries, will hark back to the rates and the element yields of the various types of SNe, revealing the connections between various progenitor systems, their explosions, and their rates. The plan, while ambitious, is feasible, thanks to the data from these truly game-changing observational projects. My team will perform all steps of the analysis and will combine the results to obtain the clearest picture of how our Galaxy came to be.
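For reference, the delay-time distribution enters such modeling through the standard convolution
$$R_{\mathrm{SN}}(t) \;=\; \int_0^{t} \mathrm{SFR}(t-\tau)\,\mathrm{DTD}(\tau)\, d\tau,$$
i.e., the supernova rate at time $t$ is the star-formation history convolved with the distribution of delays $\tau$ between star formation and explosion.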
Max ERC Funding
1 859 375 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym ErgComNum
Project Ergodic theory and additive combinatorics
Researcher (PI) Tamar Ziegler
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE1, ERC-2015-CoG
Summary The last decade has witnessed a new spring for dynamical systems. The field - initiated by Poincaré in the study of the N-body problem - has become essential in the understanding of seemingly far-off fields such as combinatorics, number theory and theoretical computer science. In particular, ideas from ergodic theory played an important role in the resolution of long-standing open problems in combinatorics and number theory. A striking example is the role of dynamics on nilmanifolds in the recent proof of Hardy-Littlewood estimates for the number of solutions to systems of linear equations of finite complexity in the prime numbers. The interplay between ergodic theory, number theory and additive combinatorics has proved very fruitful; it is a fast-growing area in mathematics attracting many young researchers. We propose to tackle central open problems in the area.
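A central example of this ergodic-combinatorial link is Furstenberg's multiple recurrence theorem, equivalent to Szemerédi's theorem: for any measure-preserving system $(X,\mathcal{B},\mu,T)$, any $A$ with $\mu(A)>0$ and any $k \ge 1$,
$$\liminf_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} \mu\big(A \cap T^{-n}A \cap \cdots \cap T^{-kn}A\big) > 0.$$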
Max ERC Funding
1 342 500 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym ERGODICNONCOMPACT
Project Ergodic theory on non compact spaces
Researcher (PI) Omri Moshe Sarig
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE1, ERC-2009-StG
Summary The proposal is to look for, and investigate, new ergodic theoretic types of behavior for dynamical systems which act on non compact spaces. These could include transience and non-trivial ways of escape to infinity, critical phenomena similar to phase transitions, and new types of measure rigidity. There are potential applications to smooth ergodic theory (non-uniform hyperbolicity), algebraic ergodic theory (actions on homogeneous spaces), and probability theory (weakly dependent stochastic processes).
Max ERC Funding
539 479 €
Duration
Start date: 2009-10-01, End date: 2014-09-30
Project acronym ETASECS
Project Extremely Thin Absorbers for Solar Energy Conversion and Storage
Researcher (PI) Avner Rothschild
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Consolidator Grant (CoG), PE8, ERC-2013-CoG
Summary ETASECS aims at making a breakthrough in the development of photoelectrochemical (PEC) cells for solar-powered water splitting that can be readily integrated with PV cells to provide storage capacity in the form of hydrogen. It builds upon our recent invention of resonant light trapping in ultrathin films of iron oxide (α-Fe2O3), which enables overcoming the deleterious trade-off between light absorption and charge carrier collection efficiency. Although we recently broke the water photo-oxidation record of any α-Fe2O3 photoanode reported to date, the losses are still high and there is plenty of room for further improvements that will lead to a remarkable enhancement in the performance of our photoanodes, reaching a quantum efficiency level similar to state-of-the-art PV cells. ETASECS aims at reaching this ambitious goal, which is essential for demonstrating the competitiveness of PEC+PV tandem systems for solar energy conversion and storage. Towards this end, WP1 will combine theory, modelling and simulations, state-of-the-art experimental methods and advanced diagnostic techniques in order to identify and quantify the different losses in our devices. This work will guide the optimization work in WP2 that will suppress the losses at the photoanode and ensure optimal electrical and optical coupling of the PEC and PV cells. We will also explore advanced photon management schemes that will go beyond our current light trapping scheme by combining synergistic optical and nanophotonic effects. WP3 will integrate the PEC and PV cells and test their properties and performance. WP4 will disseminate our progress and achievements in professional and public forums. The innovations that will emerge from this frontier research will be further pursued in proof-of-concept follow-up investigations that will demonstrate the feasibility of this technology. Success along these lines holds exciting promise for groundbreaking progress towards large-scale deployment of solar energy.
Max ERC Funding
2 150 000 €
Duration
Start date: 2014-09-01, End date: 2019-08-31
Project acronym EXPANDERS
Project Expander Graphs in Pure and Applied Mathematics
Researcher (PI) Alexander Lubotzky
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2008-AdG
Summary Expander graphs are finite graphs which play a fundamental role in many areas of computer science such as communication networks, algorithms and more. Several areas of deep mathematics have been used in order to give explicit constructions of such graphs, e.g. Kazhdan's property (T) from the representation theory of semisimple Lie groups, the Ramanujan conjecture from the theory of automorphic forms, and more. In recent years, computer science has started to pay its debt to mathematics: expander graphs are playing an increasing role in several areas of pure mathematics. The goal of the current research plan is to deepen these connections in both directions, with special emphasis on the more recent and surprising applications of expanders to group theory, the geometry of 3-manifolds and number theory.
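For reference, one standard way to make the notion precise: a family of $d$-regular graphs $G_i$ with $|V(G_i)| \to \infty$ is an expander family if there is $\varepsilon > 0$ with $\lambda_2(G_i) \le d - \varepsilon$ for all $i$, where $\lambda_2$ is the second-largest adjacency eigenvalue; equivalently, the edge expansion $h(G_i) = \min_{0 < |S| \le |V|/2} |E(S,\bar S)|/|S|$ is bounded away from zero.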
Max ERC Funding
1 082 504 €
Duration
Start date: 2008-10-01, End date: 2014-09-30
Project acronym EXQFT
Project Exact Results in Quantum Field Theory
Researcher (PI) Zohar Komargodski
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE2, ERC-2013-StG
Summary Quantum field theory (QFT) is a unified conceptual and mathematical framework that encompasses a veritable cornucopia of physical phenomena, including phase transitions, condensed matter systems, elementary particle physics, and (via the holographic principle) quantum gravity. QFT has become the standard language of modern theoretical physics.
Despite the fact that QFT is omnipresent in physics, we have virtually no tools to analyze from first principles many of the interesting systems that appear in nature. (For instance, Quantum Chromodynamics, non-Fermi liquids, and even boiling water.)
Our main goal in this proposal is to develop new tools that would allow us to make progress on this fundamental problem. To this end, we will employ two strategies.
First, we propose to study in detail systems that possess extra symmetries (and are hence simpler). For example, critical systems often admit the group of conformal transformations. Another example is given by theories with Bose-Fermi degeneracy (supersymmetric theories). We will explain how we think significant progress can be achieved in this area. Advances here will allow us to wield more analytic control over relatively simple QFTs and extract physical information from these models. Such information can be useful in many areas of physics and lead to new connections with mathematics. Second, we will study general properties of renormalization group flows. Renormalization group flows govern the dynamics of QFT and understanding their properties may lead to substantial developments. Very recent progress along these lines has already led to surprising new results about QFT and may have direct applications in several areas of physics. Much more can be achieved.
These two strategies are complementary and interwoven.
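One example of the kind of general property of renormalization group flows referred to above is the four-dimensional $a$-theorem: for a unitary flow between ultraviolet and infrared fixed points, the Euler anomaly coefficients satisfy $a_{\mathrm{UV}} > a_{\mathrm{IR}}$, so that $a$ counts the degrees of freedom lost along the flow.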
Max ERC Funding
1 158 692 €
Duration
Start date: 2013-09-01, End date: 2018-08-31
Project acronym EXTPRO
Project Quasi-Randomness in Extremal Combinatorics
Researcher (PI) Asaf Shapira
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2014-STG
Summary Combinatorics is an extremely fast-growing mathematical discipline. While it started as a collection of isolated problems that were tackled using ad-hoc arguments, it has since grown into a mature discipline which has both incorporated deep tools from other mathematical areas and found applications in areas such as Additive Number Theory, Theoretical Computer Science, Computational Biology and Information Theory.
The PI will work on a variety of problems in Extremal Combinatorics, which is one of the most active subareas within Combinatorics, with spectacular recent developments. A typical problem in this area asks one to minimize (or maximize) a certain parameter attached to a discrete structure subject to several other constraints. One of the most powerful tools used in attacking problems in this area is the so-called Structure vs Randomness phenomenon. This roughly means that any deterministic object can be partitioned into smaller quasi-random objects, that is, objects that have properties we expect to find in truly random ones. The PI has already made significant contributions in this area, and our goal in this proposal is to obtain further results of this caliber by tackling some of the hardest open problems at the forefront of current research. Some of these problems are related to the celebrated Hypergraph and Arithmetic Regularity Lemmas, to super-saturation problems in Additive Combinatorics and Graph Theory, to problems in Ramsey Theory, as well as to applications of Extremal Combinatorics to problems in Theoretical Computer Science. Another major goal of this proposal is to develop new approaches and techniques for tackling problems in Extremal Combinatorics.
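To fix one standard formalization of quasi-randomness (added here for reference, not taken from the proposal): for disjoint vertex sets X, Y in a graph with edge density d(X,Y) = e(X,Y)/(|X||Y|), the pair (X,Y) is called ε-regular if
\[ |d(A,B) - d(X,Y)| \le \varepsilon \quad \text{for all } A \subseteq X,\ B \subseteq Y \text{ with } |A| \ge \varepsilon |X|,\ |B| \ge \varepsilon |Y|. \]
Szemerédi's Regularity Lemma partitions any large graph into a bounded number of parts almost all pairs of which are ε-regular; the Hypergraph and Arithmetic Regularity Lemmas mentioned above extend this principle to other structures.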
The support by means of a 5-year research grant will enable the PI to further establish himself as a leading researcher in Extremal Combinatorics and to build a vibrant research group in the field.
Max ERC Funding
1 221 921 €
Duration
Start date: 2015-03-01, End date: 2021-02-28
Project acronym FACT
Project Factorizing the wave function of large quantum systems
Researcher (PI) Eberhard Gross
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE3, ERC-2017-ADG
Summary This proposal puts forth a novel strategy to tackle large quantum systems. A variety of highly sophisticated methods such as quantum Monte Carlo, configuration interaction, coupled cluster, tensor networks, Feynman diagrams, dynamical mean-field theory, density functional theory, and semi-classical techniques have been developed to deal with the enormous complexity of the many-particle Schrödinger equation. The goal of our proposal is not to add another method to these standard techniques; instead, we develop a systematic way of combining them. The essential ingredient is a novel way of decomposing the wave function, without approximation, into factors that describe subsystems of the full quantum system. This so-called exact factorization is asymmetric. In the case of two subsystems, one factor is a wave function satisfying a regular Schrödinger equation, while the other factor is a conditional probability amplitude satisfying a more complicated Schrödinger-like equation with a non-local, non-linear and non-Hermitian “Hamiltonian”. Since each subsystem is necessarily smaller than the full system, the above standard techniques can be applied more efficiently and, most importantly, different standard techniques can be applied to different subsystems. The power of the exact factorization lies in its versatility. Here we apply the technique to five different scenarios. The first two deal with non-adiabatic effects in (i) molecules and (ii) solids; here the natural subsystems are electrons and nuclei. The third scenario deals with (iii) nuclear motion in molecules attached to semi-infinite metallic leads, requiring three subsystems: the electrons, the nuclei in the leads, which ultimately reduce to a phonon bath, and the molecular nuclei, which may perform large-amplitude movements such as current-induced isomerization. The remaining scenarios address (iv) purely electronic correlations and (v) the interaction of matter with the quantized electromagnetic field, i.e., electrons, nuclei and photons.
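As a minimal sketch of the two-subsystem case (our notation, following the published exact-factorization literature rather than the proposal text), for an electron-nuclear system with electronic coordinates r and nuclear coordinates R:
\[ \Psi(r, R) = \chi(R)\, \Phi_R(r), \qquad \int |\Phi_R(r)|^2 \, dr = 1 \ \text{for every fixed } R, \]
where χ(R) obeys an ordinary Schrödinger equation with exact potentials, while the conditional amplitude Φ_R(r) obeys the non-linear, non-Hermitian companion equation alluded to above. The partial normalization condition is what makes the factorization exact and, up to a phase, unique.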
Max ERC Funding
2 443 932 €
Duration
Start date: 2019-09-01, End date: 2024-08-31
Project acronym FADER
Project Flight Algorithms for Disaggregated Space Architectures
Researcher (PI) Pinchas Pini Gurfil
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE7, ERC-2011-StG_20101014
Summary Standard spacecraft designs comprise modules assembled in a single monolithic structure. When unexpected situations occur, the spacecraft are unable to adequately respond, and significant functional and financial losses are unavoidable. For instance, if the payload of a spacecraft fails, the whole system becomes unserviceable and substitution of the entire spacecraft is required. It would be much easier to replace the payload module only than to launch a completely new satellite. This idea gives rise to an emerging concept in space engineering termed disaggregated spacecraft. Disaggregated space architectures (DSA) consist of several physically-separated modules, interacting through wireless communication links to form a single virtual platform. Each module has one or more pre-determined functions: navigation, attitude control, power generation and payload operation. The free-flying modules, capable of resource sharing, do not have to operate in a tightly-controlled formation, but are rather required to remain in bounded relative position and attitude, termed cluster flying. DSA enables novel space system architectures, which are expected to be much more efficient, adaptable, robust and responsive. The main goal of the proposed research is to develop beyond-state-of-the-art technologies that enable operational flight of DSA, by (i) developing algorithms for semi-autonomous long-duration maintenance of a cluster and cluster network, capable of adding and removing spacecraft modules to/from the cluster and cluster network; (ii) finding methods to autonomously reconfigure the cluster to retain safety- and mission-critical functionality in the face of network degradation or component failures; (iii) designing semi-autonomous cluster scatter and re-gather maneuvers to rapidly evade a debris-like threat; and (iv) validating the said algorithms and methods in the Distributed Space Systems Laboratory in which the PI serves as a Principal Investigator.
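For reference (our addition, not part of the proposal), bounded relative motion of the kind required for cluster flying is commonly analyzed with the Hill-Clohessy-Wiltshire equations, linearized about a circular reference orbit with mean motion n (x radial, y along-track, z cross-track):
\[ \ddot{x} - 2n\dot{y} - 3n^2 x = 0, \qquad \ddot{y} + 2n\dot{x} = 0, \qquad \ddot{z} + n^2 z = 0, \]
whose solutions remain bounded (drift-free) exactly when the initial conditions satisfy \( \dot{y}_0 = -2 n x_0 \); cluster-keeping algorithms can be viewed as maintaining such conditions under perturbations.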
Max ERC Funding
1 500 000 €
Duration
Start date: 2011-10-01, End date: 2016-09-30
Project acronym FAFC
Project Foundations and Applications of Functional Cryptography
Researcher (PI) Gil SEGEV
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary "Modern cryptography has successfully followed an ""all-or-nothing"" design paradigm over the years. For example, the most fundamental task of data encryption requires that encrypted data be fully recoverable using the encryption key, but be completely useless without it. Nowadays, however, this paradigm is insufficient for a wide variety of evolving applications, and a more subtle approach is urgently needed. This has recently motivated the cryptography community to put forward a vision of ""functional cryptography'': Designing cryptographic primitives that allow fine-grained access to sensitive data.
This proposal aims at making substantial progress towards realizing the premise of functional cryptography. By tackling challenging key problems in both the foundations and the applications of functional cryptography, I plan to direct the majority of our effort towards the following three fundamental objectives, which span a broad and interdisciplinary range of research directions: (1) obtain a better understanding of functional cryptography's building blocks, (2) develop functional cryptographic tools and schemes based on well-studied assumptions, and (3) increase the usability of functional cryptographic systems via algorithmic techniques.
Realizing the premise of functional cryptography is of utmost importance not only to the development of modern cryptography, but in fact to our entire technological development, where fine-grained access to sensitive data plays an instrumental role. Moreover, our objectives are tightly related to two of the most fundamental open problems in cryptography: basing cryptography on widely-believed worst-case complexity assumptions, and basing public-key cryptography on private-key primitives. I strongly believe that meaningful progress towards achieving our objectives will shed new light on these key problems, and thus have a significant impact on our understanding of modern cryptography.
Max ERC Funding
1 307 188 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym FAST FILTERING
Project Fast Filtering for Computer Graphics, Vision and Computational Sciences
Researcher (PI) Raanan Fattal
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE6, ERC-2013-StG
Summary The world of digital signal processing, in particular computer graphics, vision and image processing, uses linear and non-linear, explicit and implicit filtering extensively to analyze, process and synthesize images. Given today's high-resolution sensors, these operations are often very time-consuming and are limited to devices with high CPU power.
Traditional linear translation-invariant (LTI) transformations, executed using convolution, require O(N^2) operations. This can be lowered to O(N log N) via the FFT over suitable domains. There are very few sets of filters for which optimal, linear-time procedures are known. The situation is more complicated in the newly-emerging domain of non-linear spatially-varying filters: exact application of such a filter requires O(N^2) operations, and acceleration methods operate in higher-dimensional spaces, introducing severe memory costs and truncation errors.
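A minimal sketch (our illustration, not the project's code) of the complexity gap mentioned above, contrasting direct O(N^2) circular convolution with the O(N log N) route through the convolution theorem:

import numpy as np

def circular_convolve_direct(signal, kernel):
    """Circular convolution straight from the definition: O(N^2) operations."""
    n = len(signal)
    out = np.zeros(n)
    for i in range(n):
        for j in range(n):
            out[i] += signal[j] * kernel[(i - j) % n]
    return out

def circular_convolve_fft(signal, kernel):
    """Same result via the convolution theorem: O(N log N) operations."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

rng = np.random.default_rng(0)
x, k = rng.standard_normal(256), rng.standard_normal(256)
assert np.allclose(circular_convolve_direct(x, k), circular_convolve_fft(x, k))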
In this research proposal we intend to derive fast, linear-time procedures for different types of LTI filters by exploiting a deep connection between convolution, spatially-homogeneous elliptic equations and the multigrid method for solving such equations. Based on this circular connection we draw novel prospects for deriving new multiscale filtering procedures.
A second part of this research proposal is devoted to deriving efficient explicit and implicit non-linear spatially-varying edge-aware filters. One front consists of deriving novel multi-level image decompositions that mimic the action of inhomogeneous diffusion operators. The idea here is, once again, to bridge the gap with numerical analysis and to use ideas from multiscale matrix preconditioning for the design of new biorthogonal second-generation wavelets.
Moreover, this proposal outlines a new multiscale preconditioning paradigm combining ideas from algebraic multigrid and combinatorial matrix preconditioning. This intermediate approach offers new ways for overcoming fundamental shortcomings in this domain.
Max ERC Funding
1 320 200 €
Duration
Start date: 2013-08-01, End date: 2018-07-31
Project acronym FDP-MBH
Project Fundamental dynamical processes near massive black holes in galactic nuclei
Researcher (PI) Tal Alexander
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE7, ERC-2007-StG
Summary "I propose to combine analytical studies and simulations to explore fundamental open questions in the dynamics and statistical mechanics of stars near massive black holes. These directly affect key issues such as the rate of supply of single and binary stars to the black hole, the growth and evolution of single and binary massive black holes and the connections to the evolution of the host galaxy, capture of stars around the black hole, the rate and modes of gravitational wave emission from captured compact objects, stellar tidal heating and destruction, and the emergence of ""exotic"" stellar populations around massive black holes. These processes have immediate observational implications and relevance in view of the huge amounts of data on massive black holes and galactic nuclei coming from earth-bound and space-borne telescopes, from across the electromagnetic spectrum, from cosmic rays, and in the near future also from neutrinos and gravitational waves."
Max ERC Funding
880 000 €
Duration
Start date: 2008-09-01, End date: 2013-08-31
Project acronym FIELDGRADIENTS
Project Phase Transitions and Chemical Reactions in Electric Field Gradients
Researcher (PI) Yoav Tsori
Host Institution (HI) BEN-GURION UNIVERSITY OF THE NEGEV
Call Details Starting Grant (StG), PE3, ERC-2010-StG_20091028
Summary We will study phase transitions and chemical and biological reactions in liquid mixtures in electric field gradients. These new phase transitions are essential in statistical physics and thermodynamics. We will examine theoretically the complex and yet unexplored phase ordering dynamics in which droplets nucleate and move under the external nonuniform force. We will look in detail at the interfacial instabilities which develop when the field is increased. We will investigate how time-varying potentials produce electromagnetic waves and how their spatial decay in the bistable liquid leads to phase changes.
These transitions open a new and general way to control the spatio-temporal behaviour of chemical reactions by directly manipulating the solvents' concentrations. When two or more reagents are preferentially soluble in one of the mixture's components, field-induced phase separation leads to acceleration of the reaction. When the reagents are soluble in different solvents, field-induced demixing will lead to the reaction taking place at a slow rate and at a two-dimensional surface. Additionally, the electric field allows us to turn the reaction on or off. The numerical study and simulations will be complemented by experiments. We will study theoretically and experimentally biochemical reactions. We will find how actin-related structures are affected by field gradients. Using an electric field as a tool we will control the rate of actin polymerisation. We will investigate if an external field can damage cancer cells by disrupting their actin-related activity. The above phenomena will be studied in a microfluidics environment. We will elucidate the separation hydrodynamics occurring when thermodynamically miscible liquids flow in a channel and how electric fields can reversibly create and destroy optical interfaces, as is relevant in optofluidics. Chemical and biological reactions will be examined in the context of lab-on-a-chip.
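As a schematic of the underlying coupling (our addition, in standard dielectric-mixture notation rather than the proposal's), field-induced demixing can be modeled by augmenting the mixture free energy with the electrostatic energy of a composition-dependent dielectric constant ε(φ), for an electric potential ψ held fixed at the electrodes:
\[ F[\phi, \psi] = \int dV \left[ f_{\mathrm{mix}}(\phi, T) - \tfrac{1}{2}\, \varepsilon_0\, \varepsilon(\phi)\, |\nabla \psi|^2 \right], \]
so that field gradients pull the high-dielectric-constant component towards high-field regions; above a threshold field this term can tilt an otherwise stable mixture across its coexistence curve, nucleating the droplets described above.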
Max ERC Funding
1 482 200 €
Duration
Start date: 2010-08-01, End date: 2015-07-31
Project acronym Fireworks
Project Celestial fireworks: revealing the physics of the time-variable sky
Researcher (PI) Avishay Gal-Yam
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE9, ERC-2016-COG
Summary Experimental time-domain astrophysics is on the verge of a new era as technological, computational, and operational progress combine to revolutionise the manner in which we study the time-variable sky. This proposal consolidates previous breakthrough work on wide-field surveys into a coherent program to advance our study of the variable sky on ever-decreasing time-scales: from days, through hours, to minutes. We will watch how stars explode in real time in order to study the complex physics of stellar death, build new tools to handle and analyse the uniquely new data sets we are collecting, and shed light on some of the most fundamental questions in modern astrophysics: from the origin of the elements, via the explosion mechanism of supernovae, to the feedback processes that drive star formation and galaxy evolution.
Max ERC Funding
2 461 111 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym FOC
Project Foundations of Cryptographic Hardness
Researcher (PI) Iftach Ilan Haitner
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary A fundamental research challenge in modern cryptography is understanding the necessary hardness assumptions required to build different cryptographic primitives. Attempts to answer this question have met with tremendous success in the last 20-30 years. Most notably, it was shown that many highly complicated primitives can be based on the mere existence of one-way functions (i.e., easy to compute and hard to invert), while other primitives cannot be based on such functions. This research has yielded fundamental tools and concepts such as randomness extractors and computational notions of entropy. Yet many of the most fundamental questions remain unanswered.
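To recall the central definition (added here for reference), a function f: {0,1}* → {0,1}* is one-way if it is computable in polynomial time, yet for every probabilistic polynomial-time algorithm A the probability of inverting it on a random input is negligible:
\[ \Pr_{x \leftarrow \{0,1\}^n}\!\left[ f\big(A(1^n, f(x))\big) = f(x) \right] \le \mathrm{negl}(n). \]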
Our first goal is to answer the fundamental question of whether cryptography can be based on the assumption that P does not equal NP. Our second and third goals are to build more efficient symmetric-key cryptographic primitives from one-way functions, and to establish effective methods for security amplification of cryptographic primitives. Succeeding in the second and third goals is likely to have great bearing on the way that we construct the very basic cryptographic primitives. A positive answer to the first question would be considered a dramatic result in the cryptography and computational complexity communities.
To address these goals, it is very useful to understand the relationship between different types and quantities of cryptographic hardness. Such understanding typically involves defining and manipulating different types of computational entropy, and comprehending the power of security reductions. We believe that this research will yield new concepts and techniques, with ramifications beyond the realm of foundational cryptography.
Max ERC Funding
1 239 838 €
Duration
Start date: 2015-03-01, End date: 2021-02-28
Project acronym FORECASToneMONTH
Project Forecasting Surface Weather and Climate at One-Month Leads through Stratosphere-Troposphere Coupling
Researcher (PI) Chaim Israel Garfinkel
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE10, ERC-2015-STG
Summary Anomalies in surface temperatures, winds, and precipitation can significantly alter energy supply and demand, cause flooding, and cripple transportation networks. Better management of these impacts can be achieved by extending the duration of reliable predictions of the atmospheric circulation.
Polar stratospheric variability can impact surface weather for well over a month, and this proposed research presents a novel approach towards understanding the fundamentals of how this coupling occurs. Specifically, we are interested in: 1) how predictable are anomalies in the stratospheric circulation? 2) why do only some stratospheric events modify surface weather? and 3) what is the mechanism whereby stratospheric anomalies reach the surface? While this last question may appear academic, several studies indicate that stratosphere-troposphere coupling drives the midlatitude tropospheric response to climate change; therefore, a clearer understanding of the mechanisms will aid in the interpretation of the upcoming changes in the surface climate.
I propose a multi-pronged effort aimed at addressing these questions and improving monthly forecasting. First, carefully designed modelling experiments using a novel modelling framework will be used to clarify how, and under what conditions, stratospheric variability couples to tropospheric variability. Second, novel linkages between variability external to the stratospheric polar vortex and the stratospheric polar vortex will be pursued, thus improving our ability to forecast polar vortex variability itself. To these ends, my group will develop 1) an analytic model for Rossby wave propagation on the sphere, and 2) a simplified general circulation model, which captures the essential processes underlying stratosphere-troposphere coupling. By combining output from the new models, observational data, and output from comprehensive climate models, the connections between the stratosphere and surface climate will be elucidated.
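For reference (our addition), the simplest setting for the proposed analytic model is the barotropic Rossby wave dispersion relation on a β-plane with zonal-mean flow ū,
\[ \omega = \bar{u} k - \frac{\beta k}{k^2 + l^2}, \]
where k and l are the zonal and meridional wavenumbers and β is the meridional gradient of planetary vorticity; ray tracing with such relations, suitably generalized to the sphere, describes how planetary waves propagate from the troposphere into the stratospheric polar vortex.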
Max ERC Funding
1 808 000 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym FQHE
Project Statistics of Fractionally Charged Quasi-Particles
Researcher (PI) Mordehai (Moty) Heiblum
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE3, ERC-2008-AdG
Summary The discovery of the fractional quantum Hall effect created a revolution in solid state research by introducing a new state of matter resulting from strong electron interactions. The new state is characterized by excitations (quasi-particles) that carry fractional charge, which are expected to obey fractional statistics. While odd-denominator fractional states are expected to have abelian statistics, the newly discovered 5/2 even-denominator fractional state is expected to have non-abelian statistics. Moreover, a large number of emerging proposals predict that the latter state can be employed for topological quantum computing (Station Q was founded by Microsoft Corp. in order to pursue this goal). This proposal aims at studying the abelian and non-abelian fractional charges, and in particular at observing their peculiar statistics. While charges are preferably determined by measuring quantum shot noise, their statistics must be determined via interference experiments, where one particle goes around another. The experiments are very demanding since the even-denominator fractions turn out to be very fragile and thus can be observed only in the purest possible two-dimensional electron gas and at the lowest temperatures. While until very recently such high-quality samples were available only from a single grower (in the USA), we now have the capability to grow extremely pure samples with pronounced even-denominator states. As will be detailed in the proposal, we have all the necessary tools to study the charge and statistics of these fascinating excitations, due to our experience in crystal growth, shot noise and interferometry measurements.
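For orientation (our addition), the charge measurement works because the low-frequency shot noise of a weakly backscattered current is proportional to the charge of the discrete carriers; in the weak-backscattering limit
\[ S \approx 2 e^{*} I_B, \]
where I_B is the backscattered current, so measuring S versus I_B yields the quasi-particle charge directly, e.g. e* = e/3 at filling factor ν = 1/3 and e* = e/4 at ν = 5/2.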
Max ERC Funding
2 000 000 €
Duration
Start date: 2009-01-01, End date: 2013-12-31
Project acronym FRACTALSANDMETRICNT
Project Fractals, algebraic dynamics and metric number theory
Researcher (PI) Michael Hochman
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE1, ERC-2012-StG_20111012
Summary We propose to study the fractal geometry of invariant sets for endomorphisms of compact abelian groups, specifically a family of conjectures by Furstenberg on the dimensions of orbit closures under such dynamics, and on the size of sums and intersections of invariant sets. These conjectures are related to problems on expansion in integer bases, in Diophantine approximation, measure rigidity, analysis and equidistribution. The project focuses on the conjectures themselves and some related problems, e.g. Bernoulli convolutions, and on applications to equidistribution on tori. Our approach combines tools from ergodic theory, geometric measure theory and additive combinatorics, building on recent progress in these fields and recent partial results towards the main conjectures.
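A representative example (our paraphrase, not the proposal's wording) is Furstenberg's intersection conjecture: if X, Y ⊆ [0,1] are closed sets invariant under x ↦ 2x mod 1 and x ↦ 3x mod 1 respectively, then
\[ \dim_H (X \cap Y) \le \max\{0,\ \dim_H X + \dim_H Y - 1\}, \]
i.e. expansion in multiplicatively independent bases behaves as independently ("randomly") as dimension counting allows.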
Max ERC Funding
1 107 000 €
Duration
Start date: 2012-10-01, End date: 2018-09-30
Project acronym FRACTFRICT
Project Fracture and Friction: Rapid Dynamics of Material Failure
Researcher (PI) Jay Fineberg
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE3, ERC-2010-AdG_20100224
Summary FractFrict is a comprehensive study of the space-time dynamics that lead to the failure of both bulk materials and frictionally bound interfaces. In these systems, failure is precipitated by rapidly moving singular fields at the tips of propagating cracks or crack-like fronts that cause material damage at microscopic scales. These generate damage that is macroscopically reflected as characteristic large-scale modes of material failure. Thus, the structure of the fields that microscopically drive failure is critically important for an overall understanding of how macroscopic failure occurs.
The innovative real-time measurements proposed here will provide a fundamental understanding of the form of the singular fields, their modes of regularization and their relation to the resultant macroscopic modes of failure, encompassing different classes of bulk materials and material interfaces.
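For reference (our addition), the singular fields in question take the universal square-root form of linear elastic fracture mechanics near a crack tip, at distance r and angle θ from the tip:
\[ \sigma_{ij}(r, \theta) \approx \frac{K}{\sqrt{2\pi r}}\, f_{ij}(\theta), \]
where K is the stress intensity factor and f_ij are universal angular functions; how this singularity is regularized at small scales is precisely what sets the macroscopic failure mode.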
We aim:
[1] To establish a fundamental understanding of the dynamics of the near-tip singular fields, their regularization modes and how they couple to the macroscopic dynamics in both frictional motion and fracture.
[2] To determine the types of singular failure processes in different classes of materials and interfaces (e.g. the brittle-to-ductile transition in amorphous materials, the role of fast fracture processes in frictional motion).
[3] To establish local (microscopic) laws of friction/failure and how they evolve into their macroscopic counterparts.
[4] To identify the existence and origins of crack instabilities in bulk and interface failure.
The insights obtained in this research will enable us to manipulate and/or predict material failure modes. The results of this study will shed considerable new light on fundamental open questions in fields as diverse as material design, tribology and geophysics.
Max ERC Funding
2 265 399 €
Duration
Start date: 2010-12-01, End date: 2016-11-30
Project acronym FSC
Project Fast and Sound Cryptography: From Theoretical Foundations to Practical Constructions
Researcher (PI) Alon Rosen
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Much currently deployed cryptography is designed using more “art'” than “science,” and most of the schemes used in practice lack rigorous justification for their security. While theoretically sound designs do exist, they tend to be quite a bit slower to run and hence are not realistic from a practical point of view. This gap is especially evident in “low-level” cryptographic primitives, which are the building blocks that ultimately process the largest quantities of data.
Recent years have witnessed dramatic progress in the understanding of highly-parallelizable (local) cryptography, and in the construction of schemes based on the mathematics of geometric objects called lattices. Besides being based on firm theoretical foundations, these schemes also allow for very efficient implementations, especially on modern microprocessors. Yet despite all this recent progress, there has not yet been a major effort specifically focused on bringing the efficiency of such constructions as close as possible to practicality; this project will do exactly that.
The main goal of the Fast and Sound Cryptography project is to develop new tools and techniques that would lead to practical and theoretically sound implementations of cryptographic primitives. We plan to draw ideas from both theory and practice, and expect their combination to generate new questions, conjectures, and insights. A considerable fraction of our efforts will be devoted to demonstrating the efficiency of our constructions. This will be achieved by a concrete setting of parameters, allowing for cryptanalysis and direct performance comparison to popular designs.
While our initial focus will be on low-level primitives, we expect our research to also have direct impact on the practical efficiency of higher-level cryptographic tasks. Indeed, many of the recent improvements in the efficiency of lattice-based public-key cryptography can be traced back to research on the efficiency of lattice-based hash functions.
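A toy sketch (our illustration; the parameters are far too small to be secure, and this is only one classic example of a lattice-based hash) of an Ajtai-style construction, whose collision resistance reduces to the Short Integer Solution (SIS) problem on lattices:

import numpy as np

def keygen(n=32, m=512, q=3329, seed=0):
    """Sample the public key: a uniformly random matrix A in Z_q^{n x m}."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, q, size=(n, m)), q

def hash_bits(A, q, x):
    """Compress m input bits to a vector in Z_q^n: h = A x mod q.
    A collision x != x' would yield a short nonzero vector x - x'
    (entries in {-1, 0, 1}) with A (x - x') = 0 mod q, i.e. a SIS solution."""
    assert x.shape == (A.shape[1],) and set(np.unique(x)) <= {0, 1}
    return (A @ x) % q

A, q = keygen()
x = np.random.default_rng(1).integers(0, 2, size=512)
digest = hash_bits(A, q, x)  # 512 bits compressed to a length-32 vector mod q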
Max ERC Funding
1 498 214 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym FTHPC
Project Fault Tolerant High Performance Computing
Researcher (PI) Oded Schwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures), resulting from the steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ascertaining fault tolerance for high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort, but severely degrading performance (e.g., checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general purpose fault resilience.
Efficient general-purpose solutions (e.g., via error correcting codes) revolutionized memory and communication devices over two decades ago, enabling programmers to effectively disregard the very likely memory and communication errors. The time has come for a similar paradigm shift in the computing regime. I argue that exciting recent advances in error correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
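A minimal sketch (our illustration, not the project's method) of the flavor of algorithmic fault tolerance at stake, in the spirit of Huang-Abraham algorithm-based fault tolerance (ABFT): checksums attached to the inputs of a matrix multiplication survive the computation, so a corrupted output entry can be located and corrected without recomputation:

import numpy as np

def abft_matmul(A, B):
    """Multiply with checksums: append a column-checksum row to A and a
    row-checksum column to B; the product then carries both checksums."""
    Ac = np.vstack([A, A.sum(axis=0)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ac @ Br

def check_and_correct(Cc):
    """Detect and fix a single faulty entry of C using the checksums."""
    C = Cc[:-1, :-1].copy()
    row_resid = C.sum(axis=1) - Cc[:-1, -1]  # nonzero only at the faulty row
    col_resid = C.sum(axis=0) - Cc[-1, :-1]  # nonzero only at the faulty column
    rows = np.flatnonzero(~np.isclose(row_resid, 0.0))
    cols = np.flatnonzero(~np.isclose(col_resid, 0.0))
    if rows.size == 1 and cols.size == 1:    # single-error model, for simplicity
        C[rows[0], cols[0]] -= row_resid[rows[0]]
    return C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((8, 6)), rng.standard_normal((6, 5))
Cc = abft_matmul(A, B)
Cc[2, 3] += 7.0                              # inject a fault (e.g., a bit flip)
assert np.allclose(check_and_correct(Cc), A @ B)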
Max ERC Funding
1 824 467 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym FUNMANIA
Project Functional nano Materials for Neuronal Interfacing Applications
Researcher (PI) Yael Hanein
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE7, ERC-2012-StG_20111012
Summary Recent advances in nanotechnology provide an exciting new tool-box best suited for stimulating and monitoring neurons with very high accuracy and improved bio-compatibility. In this project we propose the development of an innovative nano-material based platform to interface with neurons in-vivo, with unprecedented resolution. In particular we aim to form the building blocks for future sight-restoration devices. By doing so we will address one of the most challenging and important applications in the realm of in-vivo neuronal stimulation: a high-acuity artificial retina.
Existing technologies in the field of artificial retinas offer only very limited acuity and a radically new approach is needed to make the needed leap to achieve high-resolution stimulation. In this project we propose the development of flexible, electrically conducting, optically addressable and vertically aligned carbon nanotube based electrodes as a novel platform for targeting neurons at high fidelity. The morphology and density of the aligned tubes will mimic that of the retina photo-receptors to achieve record-high resolution.
The most challenging element of the project is the transduction from an optical signal to electrical activations at high resolution placing this effort at the forefront of nano-science and nano-technology research. To deal with this difficult challenge, vertically aligned carbon nanotubes will be conjugated with additional engineered materials, such as conducting polymers and quantum dots to build a supreme platform allowing unprecedented resolution and bio-compatibility. Ultimately, in this project we will focus on devising materials and processes that will become the building blocks of future devices so high density retinal implants and consequent sight restoration will become a reality in the conceivable future.
Max ERC Funding
1 499 560 €
Duration
Start date: 2012-10-01, End date: 2018-09-30
Project acronym GELANDERINDGEOMRGD
Project Independence of Group Elements and Geometric Rigidity
Researcher (PI) Tsachik Gelander
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE1, ERC-2007-StG
Summary The proposed research contains two main directions in group theory and geometry: Independence of Group Elements and Geometric Rigidity. The first consists of problems related to the existence of free subgroups, uniform and effective ways of producing such subgroups, and analogous questions for finite groups, where the analogs of independent elements are elements for which the Cayley graph has a large girth or a non-small expanding constant. This line of research began almost a century ago and contains many important works, including the works of Hausdorff, Banach and Tarski on paradoxical decompositions, the works of Margulis, Sullivan and Drinfeld on the Banach-Ruziewicz problem, the classical Tits Alternative, the Margulis-Soifer result on maximal subgroups, and the recent works of Eskin-Mozes-Oh and Bourgain-Gamburd. Among the famous questions is Milnor's problem on exponential versus polynomial growth for f.p. groups, originally stated for f.g. groups but reformulated after Grigorchuk's counterexample. Related works of the PI include a joint work with Breuillard on the topological Tits alternative, in which several well-known conjectures were solved, e.g. the foliated version of Milnor's problem conjectured by Carriere, and on the uniform Tits alternative, which significantly improved Tits' and EMO's theorems; a joint work with Glasner on primitive groups, in which, in particular, a conjecture of Higman and Neumann was solved; and a paper on deformation varieties, in which a conjecture of Margulis and Soifer and a conjecture of Goldman were proved. The second direction involves extensions of Margulis' and Mostow's rigidity theorems to actions of lattices in general topological groups on metric spaces, and extensions of Kazhdan's property (T) to group actions on Banach and metric spaces. This area is very active today. Related work of the PI includes his joint work with Karlsson and Margulis on generalized harmonic maps, and his joint work with Bader, Furman and Monod on actions on Banach spaces.
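As a side note, the girth of a Cayley graph, invoked above as the finite-group analog of independence, is easy to compute for toy examples. The following Python sketch (an illustration with an arbitrarily chosen group and generating set, not taken from the proposal) finds the length of a shortest cycle by breadth-first search:

def cayley_graph(elements, generators, op):
    """Cayley graph as adjacency sets; generators should be symmetric (g and g^-1)."""
    return {x: {op(x, g) for g in generators} for x in elements}

def girth(adj):
    """Length of a shortest cycle, via BFS from every vertex."""
    best = float("inf")
    for root in adj:
        dist, parent, queue = {root: 0}, {root: None}, [root]
        for u in queue:
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif parent[u] != v:          # non-tree edge: closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Toy example: Z/12Z with generating set {+-1, +-5};
# 0 -> 5 -> 10 -> 11 -> 0 is a shortest cycle, so the girth is 4.
adj = cayley_graph(range(12), [1, -1, 5, -5], lambda x, g: (x + g) % 12)
print(girth(adj))  # 4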
Max ERC Funding
750 000 €
Duration
Start date: 2008-07-01, End date: 2013-12-31
Project acronym GeneREFORM
Project Genetically Encoded Multicolor Reporter Systems For Multiplexed MRI
Researcher (PI) Amnon Bar-Shir
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE LTD
Call Details Starting Grant (StG), PE5, ERC-2015-STG
Summary In order to fully understand the complexity of biological processes that are reflected by simultaneous occurrences of intra- and inter-cellular events, multiplexed imaging platforms are needed. Fluorescent reporter genes, with their “multicolor” imaging capabilities, have revolutionized science, and their developers have been awarded the Nobel Prize. Nevertheless, the light signal source of these reporters, which restricts their use in deep tissues and in large animals (and potentially in humans), calls for alternatives.
Reporter genes for MRI, although in their infancy, have shown several unique strengths, including the ability to study the same subject longitudinally with unlimited tissue penetration and to coregister information from reporter gene expression with high-resolution anatomical images. Inspired by the multicolor capabilities of optical reporter genes, this proposal aims to develop, optimize, and implement genetically engineered reporter systems for MRI with artificial “multicolor” characteristics. Capitalizing on (i) the Chemical Exchange Saturation Transfer (CEST)-MRI contrast mechanism, which allows the use of small bioorganic molecules as MRI sensors, (ii) the frequency-encoded, color-like features of CEST, and (iii) enzyme engineering procedures that allow the optimization of enzymatic activity for a desired substrate, a “multicolor” genetically encoded MRI reporter system is proposed.
By (a) synthesizing libraries of non-natural nucleosides (“reporter probes”) to generate artificially “colored” CEST contrast, and (b) performing directed evolution of deoxyribonucleoside kinase (dNK) enzymes (“reporter genes”) to phosphorylate those nucleosides, the “multicolor” genetically encoded MRI “reporter system” will be created. The orthogonality of the obtained substrate (CEST sensor)/enzyme (mutant dNK) pairs will allow their simultaneous use as a genetically encoded reporter system for in vivo “multicolor” monitoring of reporter gene expression with MRI.
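For orientation, CEST contrast at a given frequency offset, the “color” in the above analogy, is commonly quantified through the asymmetry of the Z-spectrum about the water resonance. The Python sketch below illustrates this standard metric (MTR asymmetry) on invented toy numbers; it is not an analysis pipeline specified by the proposal:

def mtr_asymmetry(z_spectrum):
    """MTR asymmetry of a Z-spectrum sampled on a grid symmetric about 0 ppm.

    z_spectrum: normalized signals S(dw)/S0 at offsets [-N ... 0 ... +N]
    (odd length). Returns MTRasym(dw) = S(-dw)/S0 - S(+dw)/S0 for dw > 0;
    a peak at a sensor's exchangeable-proton offset is its "color".
    """
    mid = len(z_spectrum) // 2
    upfield = z_spectrum[mid - 1::-1]   # S(-dw), mirrored to align with +dw
    downfield = z_spectrum[mid + 1:]    # S(+dw)
    return [u - d for u, d in zip(upfield, downfield)]

# Toy spectrum at offsets -3..+3 ppm; the 0.10 at 0 ppm is direct water
# saturation, and the dip at +2 ppm is a hypothetical CEST pool.
z = [0.95, 0.94, 0.93, 0.10, 0.93, 0.80, 0.95]
print(mtr_asymmetry(z))  # [0.0, 0.14, 0.0] -> contrast at +2 ppm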
Max ERC Funding
1 478 284 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym GENEXP
Project Gene Expression Explored in Space and Time Using Single Gene and Single Molecule Analysis
Researcher (PI) Yaron Shav-Tal
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), LS1, ERC-2010-StG_20091118
Summary "Live-cell imaging combined with kinetic analyses has provided new biological insights on the gene expression pathway. However, such studies in mammalian cells typically require use of exogenous over-expressed gene constructs, which often form large tandem gene arrays and usually lack the complete endogenous regulatory sequences. It is therefore imperative to design methodology for analyzing gene expression kinetics of single alleles of endogenous genes. While certain steps have been taken in this direction, there are many experimental obstacles standing in the way of a robust genome-wide system for the in vivo examination of endogenous gene expression within the natural nuclear environment. GENEXP sets out to provide such a system.
It will start with methodology for robust tagging of a multitude of endogenous genes and their transcribed mRNAs in human cells using the ""CD tagging"" approach. Thereby, in vivo mRNA synthesis at the nuclear site of RNA birth will be explored in a unique manner. A high-resolution study of gene expression, in particular mRNA transcription and mRNA export, under endogenous cellular context and using a genome-wide live-cell approach will be performed. GENEXP will specifically focus on the:
i) Transcriptional kinetics of endogenous genes in single cells and cell populations;
ii) Kinetics of mRNA export on the single molecule level;
iii) Examination of the protein composition of endogenous mRNPs;
iv) High throughput scan for drugs that affect gene expression and mRNA export.
Altogether, GENEXP will provide breakthrough capability in kinetically quantifying the gene expression pathway of a large variety of endogenous genes, and the ability to examine the generated molecules on the single-molecule level. This will be done within their normal genomic and biological environment, at the single-allele level.
Max ERC Funding
1 498 510 €
Duration
Start date: 2010-12-01, End date: 2015-11-30
Project acronym GeoArchMag
Project Beyond the Holocene Geomagnetic field resolution
Researcher (PI) Ron Shaar
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE10, ERC-2018-STG
Summary For decades the Holocene has been considered a flat and “boring” epoch from the standpoint of paleomagnetism, mainly due to insufficient resolution of the available paleomagnetic data. However, recent archaeomagnetic data have revealed that the Holocene geomagnetic field is anything but stable – presenting puzzling intervals of extreme decadal-scale fluctuations and unexpected departures from a simple dipolar field structure. This new information has introduced an entirely new paradigm to the study of the geomagnetic field and to a wide range of research areas relying on paleomagnetic data, such as geochronology, climate research, and geodynamo exploration.
This proposal aims at breaking the resolution limits in paleomagnetism and providing a continuous time series of the geomagnetic field vector throughout the Holocene at decadal resolution and unprecedented accuracy. To this end I will use an innovative assemblage of data sources, jointly unique to the Levant, including rare archaeological finds, annually laminated stalagmites, varved sediments, and arid playa deposits. Together, these sources can provide unprecedented yearly resolution, whereby the “absolute” archaeomagnetic data can calibrate the “relative” terrestrial data.
The geomagnetic data will define an innovative absolute geomagnetic chronology that will be used to synchronize cosmogenic 10Be data and an extensive body of paleo-climatic indicators. With these in hand, I will address four ground-breaking problems:
I) Chronology: Developing dating techniques for resolving critical controversies in Levantine archaeology and Quaternary geology.
II) Geophysics: Exploring fine-scale geodynamo features in Earth’s core from new generations of global geomagnetic models.
III) Cosmogenics: Correlating fast geomagnetic variations with the cosmogenic isotope production rate.
IV) Climate: Testing one of the most challenging and controversial questions in geomagnetism: “Does the Earth's magnetic field play a role in climate change?”
Max ERC Funding
1 786 381 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym GLC
Project Langlands correspondence and its variants
Researcher (PI) David Kazhdan
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2009-AdG
Summary Sometimes in the sciences there are different yet complementary descriptions of the same object. A well-known example is the particle-wave duality of quantum mechanics; one mathematical analog of this duality is the Fourier transform. Questions that are difficult when formulated in one language of science may become simple when interpreted in another. The Langlands conjecture posits the existence of a correspondence between problems in arithmetic and in Representation Theory. The Langlands conjecture has only been proven in a limited number of cases, but even this has solved problems such as the famous Fermat conjecture. The aim of this project is to continue the study of the "classical" aspects of the Langlands conjecture and to extend the conjecture to the quantum geometric Langlands correspondence, higher-dimensional fields, and Kac-Moody groups (with D. Gaitsgory: quantum Langlands correspondence; D. Gaitsgory and E. Hrushovski: groups over higher-dimensional fields; A. Braverman: Kac-Moody groups; R. Bezrukavnikov, S. DeBacker, Y. Varshavsky: classical aspects of the correspondence; A. Berenstein: geometric crystals and crystal bases). The quantum case is much more symmetric than the classical case and can lead, in the limit q->0, to new insights into the classical case. The quantum case is also related to multiple Dirichlet series. New results in the quantum case would lead to progress in understanding important Number Theoretic questions. Extending the Langlands correspondence to groups over higher-dimensional fields could substantially enlarge its applicability. Studying Kac-Moody groups would provide tools for a new and important class of L-functions. This progress could lead to a proof of the existence of the analytic continuation of classical L-functions. The geometric Langlands correspondence is closely related to T-symmetry in 4-dimensional gauge theory and the understanding of this relation is important for both Mathematics and Physics.
Max ERC Funding
1 277 060 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym GLYCOTRACKER
Project Tracking Glycosylations with Targeted, Molecule-Sized “Noses”
Researcher (PI) David Margulies
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE5, ERC-2013-StG
Summary Glycobiology is poised to be the next revolution in biology and medicine; however, technical difficulties in detecting and characterizing glycans prevent many biologists from entering this field, thus hampering new discoveries and innovations. Herein, we propose developing a conceptually novel technology that will allow straightforward identification of specific glycosylation patterns in biofluids and in live cells. Distinct glycosylation states will be differentiated by developing “artificial noses” the size of a single molecule, whereas selectivity toward particular glycoproteins will be obtained by attaching them to specific protein binders. To achieve high sensitivity and accuracy, several innovations in molecular recognition and fluorescence signalling are integrated into the design of these unconventional molecular analytical devices.
One of the most important motivations for developing these sensors lies in their potential to diagnose a variety of diseases in their early stages. For example, we describe ways by which prostate cancer could be rapidly and accurately detected by a simple blood test that analyzes the glycosylation profile of the prostate-specific antigen (PSA). Another exceptional feature of these molecular analytical devices is their ability to differentiate between glycosylation patterns of specific proteins in live cells. This will solve an immense challenge in analytical glycobiology and will allow one to study how glycosylation contributes to diverse cell-signalling pathways. Finally, in the context of molecular-scale analytical devices, the proposed methodology is exceptional. We will show how “artificial noses” can be designed to target nanometric objects (e.g. protein surfaces) and operate in confined microscopic spaces (e.g. cells), which macroscopic arrays cannot address. Taken together, we expect that the proposed technology will break new ground in medical diagnosis, cell biology, and biosensing technologies.
Max ERC Funding
1 398 429 €
Duration
Start date: 2013-10-01, End date: 2018-09-30
Project acronym GMODGAMMADYNAMICS
Project Dynamics on homogeneous spaces, spectra and arithmetic
Researcher (PI) Elon Lindenstrauss
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2010-AdG_20100224
Summary We consider the dynamics of group actions on homogeneous spaces of algebraic groups. We propose to tackle the central open problems in the area, including understanding actions of diagonal groups on homogeneous spaces without an entropy assumption, a related conjecture of Furstenberg about measures on R/Z invariant under multiplication by 2 and 3, and obtaining a quantitative understanding of the equidistribution properties of unipotent flows and of groups generated by unipotents.
This has applications in arithmetic, Diophantine approximation, the spectral theory of homogeneous spaces, mathematical physics, and other fields. Connections to arithmetic combinatorics will be pursued.
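For reference, the Furstenberg conjecture mentioned above admits a concise formal statement (a standard formulation, included here for the reader's convenience):

% Standard statement of Furstenberg's x2, x3 measure conjecture:
\textbf{Conjecture (Furstenberg).} Let $\mu$ be a Borel probability measure on $\mathbb{R}/\mathbb{Z}$ that is invariant and ergodic under both maps $x \mapsto 2x \bmod 1$ and $x \mapsto 3x \bmod 1$. Then $\mu$ is either the Lebesgue measure or is supported on a finite set of rational points.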
Max ERC Funding
1 229 714 €
Duration
Start date: 2011-01-01, End date: 2016-12-31
Project acronym GNOC
Project Towards a Gaussian Network-on-Chip
Researcher (PI) Isaac Keslassy
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE5, ERC-2007-StG
Summary As chip multi-processor architectures are replacing single-processor architectures and reshaping the semiconductor industry, chip designers can hardly use their old models and benchmarks anymore. While designers were used to deterministic and reliable performance in their chips, they now face networks with unreliable traffic patterns, unreliable throughput and unreliable delays, making it hard to provide any guaranteed Quality-of-Service (QoS). In this proposal, we argue that chip designers should focus on the possible set of traffic patterns in their Network-on-Chip (NoC) architectures. We first show how to provide deterministic QoS guarantees by exploiting these patterns. Then, we explain why the cost of providing deterministic guarantees might become prohibitive, and defend an alternative statistical approach that can significantly lower the area and power. To do so, we introduce Gaussian-based NoC models, and show how they can be used to evaluate link loads, delays and throughputs, as well as to redesign the routing and capacity allocation algorithms. Finally, we argue that these models could effectively complement current benchmarks, and should become a central component in the toolbox of the future NoC designer.
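To convey the flavor of such statistical guarantees: under a Gaussian model, the probability that a link's load exceeds its capacity reduces to a normal tail bound. The Python sketch below is a minimal illustration under assumed, invented traffic parameters, not a model taken from the proposal:

from math import erf, sqrt

def overflow_probability(mean_load, std_load, capacity):
    """P(load > capacity) when the link load is modeled as Gaussian.

    This is the kind of closed-form estimate a Gaussian traffic model
    buys you: no simulation, just the tail of a normal distribution.
    """
    z = (capacity - mean_load) / std_load
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))   # Gaussian Q-function Q(z)

# Hypothetical link carrying 10 independent flows, each ~N(0.05, 0.02^2)
# of capacity: the aggregate is ~N(0.5, (0.02 * sqrt(10))^2).
print(overflow_probability(0.5, 0.02 * 10 ** 0.5, 0.8))  # about 1e-6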
Max ERC Funding
582 500 €
Duration
Start date: 2008-08-01, End date: 2012-07-31
Project acronym GRB-SN
Project The Gamma Ray Burst – Supernova Connection and Shock Breakout Physics
Researcher (PI) Ehud Nakar
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE9, ERC-2011-StG_20101014
Summary Long gamma ray bursts (long GRBs) and core-collapse supernovae (CCSNe) are two of the most spectacular explosions in the Universe. They are a focal point of research for many reasons. Nevertheless, despite considerable effort during the last several decades, there are still many fundamental open questions regarding their physics.
Long GRBs and CCSNe are related. We know that both are outcomes of a massive star collapse, and that in some cases such a collapse produces a GRB and a SN simultaneously. However, we do not know how a single stellar collapse can produce these two apparently very different explosions. The GRB-SN connection raises many questions, but it also offers new opportunities to learn about the two types of explosions.
The focus of the proposed research is on the connection between CCSNe and GRBs, and on the physics of shock breakout. As I explain in this proposal, shock breakouts play an important role in this connection, and I will therefore develop a comprehensive theory of relativistic and Newtonian shock breakout. In addition, I will study the propagation of relativistic jets inside stars, including the effects of the jet propagation and the GRB engine on the emerging SN. This will be done through a set of interrelated projects that carefully combine analytic calculations and numerical simulations. Together, these projects will be the first to model a GRB and a SN that are simultaneously produced in a single star. This, in turn, will be used to gain new insights into long GRBs and CCSNe in general.
This research will also make a direct contribution to cosmic-explosion research in general. Any observable cosmic explosion must go through a shock breakout, and considerable effort is currently invested in large field-of-view surveys searching for these breakouts. This program will provide a new theoretical basis for the interpretation of the upcoming observations.
Max ERC Funding
1 468 180 €
Duration
Start date: 2012-01-01, End date: 2017-12-31
Project acronym GRBS
Project Gamma Ray Bursts as a Focal Point of High Energy Astrophysics
Researcher (PI) Tsvi Piran
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE9, ERC-2008-AdG
Summary Gamma-Ray Bursts (GRBs), short and intense bursts of gamma-rays originating from random directions in the sky, are the brightest explosions in our Universe. They involve ultra-relativistic motion, huge magnetic fields, the strongest gravitational fields, acceleration of photons, neutrinos and cosmic rays to ultra-high energies, the collapse of massive stars, mergers of neutron star binaries, and the formation of newborn black holes. They are at the focal point of relativistic high-energy astrophysics and serve as the best laboratory for extreme physics. The internal-external shocks model was formulated to explain their inner workings. This model has had impressive successes in interpreting and predicting GRB properties. Still, it has left many fundamental questions unanswered. Furthermore, it has recently been confronted with puzzling Swift observations of the early afterglow, and it is not clear whether it needs minor revisions or a drastic overhaul. I describe here an extensive research program that deals with practically all aspects of GRBs. From a technical point of view, this program involves sophisticated state-of-the-art computations on the one hand, and fundamental theory, phenomenological analysis of observations, and data analysis on the other. My goal is to address both old and new open questions, considering, among other options, the possibility that the current model has to be drastically revised. My long-term goal, beyond understanding the inner workings of GRBs, is to create a unified theory of accretion, acceleration and collimation, and of the emission of high-energy gamma-rays and relativistic particles, that will synergize our understanding of GRBs, AGNs, microquasars, galactic binary black holes, SNRs and other high-energy astrophysics phenomena. A second hope is to find ways to utilize GRBs to reveal new physics that cannot be explored otherwise.
Max ERC Funding
1 933 460 €
Duration
Start date: 2009-01-01, End date: 2014-12-31
Project acronym GUIDEDNW
Project Guided Nanowires: From Growth Mechanism to Self-Integrating Nanosystems
Researcher (PI) Pablo Ernesto Joselevich Fingermann
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE5, ERC-2013-ADG
Summary The large-scale assembly of nanowires (NWs) with controlled orientation on surfaces remains a major challenge toward their integration into practical devices. A recent paper in Science from the PI’s group reported the guided growth of millimeter-long horizontal NWs with controlled orientations on crystal surfaces. The growth directions and crystallographic orientations of GaN NWs are controlled by their epitaxial relationship with different planes of sapphire, as well as by a graphoepitaxial effect that guides their growth along surface steps and grooves. Despite their interaction with the surface, these horizontally grown NWs have surprisingly few defects, exhibiting optical and electronic properties superior to those of vertically grown NWs. We observed that whereas in a 2D film stress accumulates in two directions, in a NW stress accumulates along its axis but can relax in the transverse direction, making the 1D system much more tolerant to mismatch than a 2D film. This new 1D nanoscale effect, along with the graphoepitaxial effect, subverts the paradigm not only in the young field of NWs, but also in the established field of epitaxy. This paves the way to highly controlled semiconductor structures with potential applications not attainable by other means.
The aim of this project is to investigate the guided growth of NWs and unleash its vast possibilities toward the realization of self-integrating nanosystems.
First, we will generalize the guided growth of NWs to a variety of semiconductors and substrates, and produce ordered arrays of NWs with coherently modulated composition and doping.
Second, we will conduct fundamental studies to investigate the correlated structure, growth mechanism, optical and electronic properties of guided NWs.
Third, we will exploit the guided growth of NWs for the production of various functional self-integrating systems, including nanocircuits, LEDs, lasers, photovoltaic cells, photodetectors, photonic and nonlinear optical devices.
Max ERC Funding
2 063 872 €
Duration
Start date: 2014-02-01, End date: 2019-01-31
Project acronym HARMONIC
Project Studies in Harmonic Analysis and Discrete Geometry: Tilings, Spectra and Quasicrystals
Researcher (PI) Nir Lev
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2016-STG
Summary This proposal is concerned with several themes which lie at the crossroads of Harmonic Analysis and Discrete Geometry. Harmonic Analysis is fundamental in all areas of science and engineering, and has vast applications in most branches of mathematics. Discrete Geometry deals with some of the most natural and beautiful problems in mathematics, which often turn out to be very deep and difficult in spite of their apparent simplicity. The proposed project deals with some fundamental problems which involve an interplay between these two important disciplines.
One theme of the project deals with tilings of the Euclidean space by translations, and the interaction of this subject with questions in orthogonal harmonic analysis. The PI has recently developed an approach to attack some problems connected with the famous conjecture due to Fuglede (1974), concerning the characterization of domains which admit orthogonal Fourier bases in terms of their ability to tile the space by translations, and in relation with the theory of multiple tiling by translates of a convex polytope, or by a function. A main goal of this project is to further develop new methods and extend some promising intermediate results obtained by the PI in these directions.
Another theme of the proposed research lies in the mathematical theory of quasicrystals. This area has received a lot of attention since the experimental discovery in the 1980's of the physical quasicrystals, namely, of non-periodic atomic structures with diffraction patterns consisting of spots. Recently, by a combination of harmonic analytic and discrete combinatorial methods, the PI was able to answer some long-standing questions of Lagarias (2000) concerning the geometry and structure of these rigid point configurations. In the present project, the PI intends to continue the investigation in the mathematical theory of quasicrystals, and to analyze some basic problems which are still open in this field.
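For reference, the conjecture in question admits a concise statement (a standard formulation, included here for the reader's convenience):

% Standard statement of Fuglede's spectral set conjecture:
\textbf{Conjecture (Fuglede, 1974).} Let $\Omega \subset \mathbb{R}^d$ be a bounded measurable set of positive measure. Then $L^2(\Omega)$ admits an orthogonal basis of exponentials $\{ e^{2\pi i \langle \lambda, x \rangle} \}_{\lambda \in \Lambda}$ if and only if $\Omega$ tiles $\mathbb{R}^d$ by translations.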
Max ERC Funding
1 260 625 €
Duration
Start date: 2016-12-01, End date: 2021-11-30
Project acronym HARMONIC
Project Discrete harmonic analysis for computer science
Researcher (PI) Yuval FILMUS
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Boolean function analysis is a topic of research at the heart of theoretical computer science. It studies functions on n input bits (for example, functions computed by Boolean circuits) from a spectral perspective, by treating them as real-valued functions on the group Z_2^n, and using techniques from Fourier and functional analysis. Boolean function analysis has been applied to a wide variety of areas within theoretical computer science, including hardness of approximation, learning theory, coding theory, and quantum complexity theory.
Despite its immense usefulness, Boolean function analysis has limited scope, since it is only appropriate for studying functions on {0,1}^n (a domain known as the Boolean hypercube). Discrete harmonic analysis is the study of functions on domains possessing richer algebraic structure such as the symmetric group (the group of all permutations), using techniques from representation theory and Sperner theory. The considerable success of Boolean function analysis suggests that discrete harmonic analysis could likewise play a central role in theoretical computer science.
The goal of this proposal is to systematically develop discrete harmonic analysis on a broad variety of domains, with an eye toward applications in several areas of theoretical computer science. We will generalize classical results of Boolean function analysis beyond the Boolean hypercube, to domains such as finite groups, association schemes (a generalization of finite groups), the quantum analog of the Boolean hypercube, and high-dimensional expanders (high-dimensional analogs of expander graphs). Potential applications include a quantum PCP theorem and two outstanding open questions in hardness of approximation: the Unique Games Conjecture and the Sliding Scale Conjecture. Beyond these concrete applications, we expect that the fundamental results we prove will have many other applications that are hard to predict in advance.
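To anchor the spectral perspective concretely, the following Python sketch (illustrative only, not code from the proposal) computes the Fourier coefficients of a function on the Boolean hypercube directly from the definition, using the characters of Z_2^n:

from itertools import product

def fourier_coefficients(f, n):
    """Fourier coefficients of f: {0,1}^n -> R over the group Z_2^n.

    f_hat(S) = E_x[ f(x) * (-1)^{sum of x_i for i in S} ],
    where the subset S of [n] is encoded as a bitmask.
    """
    coeffs = {}
    points = list(product([0, 1], repeat=n))
    for s in range(2 ** n):                       # subset S as a bitmask
        total = 0.0
        for x in points:
            parity = sum(x[i] for i in range(n) if s >> i & 1) % 2
            total += f(x) * (-1) ** parity
        coeffs[s] = total / len(points)
    return coeffs

# Majority on 3 bits, with outputs in {-1, +1}: every coefficient is 0
# except at the three singletons {i} and at the full set {0,1,2}.
maj = lambda x: 1 if sum(x) >= 2 else -1
print(fourier_coefficients(maj, 3))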
Max ERC Funding
1 473 750 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym HAS
Project Harmonic Analysis and l-adic sheaves
Researcher (PI) David Kazhdan
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2014-ADG
Summary "In recent years there has been impressive development of the higher category theory and in particular development of the categorical counterpart of the Langlands conjecture over fields of finite characteristic. But until now, this development has had little bearing on the classical problems which deal with spaces of functions. The main goal of this proposal is to build the technique to apply the category theory to classical problems. Of course on the way I will have to deal with problems in the categorical realm.
The first part of the proposal deals with construction of characters of irreducible representations of reductive groups over local nonarchimedian fields F in terms of traces of the Frobenious endomorphisms which should lead to the proof of the ""Stable center conjecture"" at least for representations of depth zero.
The second part is on the extension of the definition of L-functions of representations of reductive F-groups corresponding to an arbitrary representation of the dual groups. As it is now, the definition is known only for very special representations of the dual group and only in the case of classical groups.
The third part is on the extension of the classical theory to representations of Kac-Moody groups over local fields."
Max ERC Funding
1 569 488 €
Duration
Start date: 2015-10-01, End date: 2020-09-30
Project acronym HD-App
Project New horizons in homogeneous dynamics and its applications
Researcher (PI) Uri SHAPIRA
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE1, ERC-2017-STG
Summary We present a large variety of novel lines of research in Homogeneous Dynamics with emphasis on the dynamics of the diagonal group. Both new and classical applications are suggested, most notably to
• Number Theory
• Geometry of Numbers
• Diophantine approximation.
Emphasis is given to applications in
• Diophantine properties of algebraic numbers.
The proposal consists of four sections.
(1) In the first section we discuss questions pertaining to topological and distributional aspects of periodic orbits of the diagonal group in the space of lattices in Euclidean space. These objects encode deep information regarding Diophantine properties of algebraic numbers. We demonstrate how these questions are closely related to, and may help solve, some of the central open problems in the geometry of numbers and Diophantine approximation.
(2) In the second section we discuss Minkowski's conjecture regarding integral values of products of linear forms. For over a century this central conjecture has resisted a general solution, and we present a novel and promising strategy for its resolution.
(3) In the third section, a novel conjecture regarding the limiting distribution of infinite-volume orbits is presented, in analogy with existing results regarding finite-volume orbits. A variety of applications and special cases are then discussed, some of which give new results regarding classical concepts such as the continued fraction expansion of rational numbers.
(4) In the last section we suggest a novel strategy to attack one of the most notorious open problems in Diophantine approximation, namely: do cubic numbers have unbounded continued fraction expansion? (A toy computation illustrating this question is sketched below.) This strategy leads us to embark on a systematic study of an area of homogeneous dynamics which has not yet been studied: the dynamics on the space of discrete subgroups of rank k in R^n (identified up to scaling).
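A minimal sketch of the computation behind question (4), assuming nothing beyond the standard continued fraction algorithm (the function name and the floating-point shortcut are illustrative, not from the proposal; floating point limits accuracy to roughly the first dozen terms):

from math import floor, sqrt

def partial_quotients(x: float, n_terms: int = 15) -> list[int]:
    """Return the first n_terms partial quotients of x via the Gauss map."""
    out = []
    for _ in range(n_terms):
        a = floor(x)
        out.append(a)
        if x == a:
            break
        x = 1.0 / (x - a)  # Gauss map step
    return out

print(partial_quotients(sqrt(2)))      # quadratic irrational: eventually periodic, [1, 2, 2, 2, ...]
print(partial_quotients(2 ** (1/3)))   # cubic: whether the quotients are unbounded is the open question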
Max ERC Funding
1 432 730 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym HI-DIM COMBINATORICS
Project High-dimensional combinatorics
Researcher (PI) Nathan Linial
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2013-ADG
Summary This research program originates from a pressing practical need and from a purely new geometric perspective of discrete mathematics..
Graphs play a key role in many application areas of mathematics, providing the perfect mathematical description of all systems that are governed by pairwise interactions, in computer science, economics, biology and more. But graphs cannot fully capture scenarios in which interactions involve more than two agents. Since the theory of hypergraphs is still too under-developed, we resort to geometry and topology, which view a graph as a one-dimensional simplicial complex. I want to develop a combinatorial/geometric/probabilistic theory of higher-dimensional simplicial complexes. Inspired by the great success of random graph theory and its impact on discrete mathematics both theoretical and applied, I intend to develop a theory of random simplicial complexes.
This combinatorial/geometric point of view and the novel high-dimensional perspective shed new light on many fundamental combinatorial objects such as permutations, cycles and trees. We show that they all have high-dimensional analogs whose study leads to deep new mathematical problems. This holds great promise for real-world applications, in view of the prevalence of such objects in application domains.
Even basic aspects of graphs, permutations, etc. are much more sophisticated and subtle in high dimensions. For example, it is a key result that randomly evolving graphs undergo a phase transition with the sudden emergence of a giant component. Computer simulations of the evolution of higher-dimensional simplicial complexes reveal an even more dramatic phase transition, yet we still do not even know what a higher-dimensional giant component is.
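A toy simulation of the one-dimensional phenomenon referred to above, the emergence of a giant component in the Erdos-Renyi random graph G(n, p) around p = 1/n (all names and parameters here are illustrative; the higher-dimensional analog is precisely what is not yet understood):

import random

def largest_component(n: int, p: float) -> int:
    """Size of the largest connected component of one G(n, p) sample."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, best = [False] * n, 0
    for s in range(n):
        if seen[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:  # depth-first search over one component
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        best = max(best, size)
    return best

n = 2000
for c in (0.5, 1.0, 1.5, 2.0):  # p = c/n sweeps across the critical point c = 1
    print(c, largest_component(n, c / n))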
I also show how to use simplicial complexes (deterministic and random) to construct better error-correcting codes. I suggest a new conceptual approach to the search for high-dimensional expanders, a goal sought by many renowned mathematicians.
Max ERC Funding
1 754 600 €
Duration
Start date: 2013-10-01, End date: 2018-09-30
Project acronym HIEXP
Project High Dimensional Expanders, Ramanujan Complexes and Codes
Researcher (PI) Alex LUBOTZKY
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2015-AdG
Summary "Expander graphs have been playing a fundamental role in many areas of computer science. During the last 15 years they have also found important and unexpected applications in pure mathematics. The goal of the current research is to develop systematically high-dimensional (HD) theory of expanders, i.e., simplicial complexes and hypergraphs which resemble in dimension d, the role of expander graphs for d = 1. There are several motivations for developing such a theory, some from pure mathematics and some from computer science. For example, Ramanujan complexes (the HD versions of the ""optimal"" expanders, the Ramanujan graphs) have already been useful for extremal hypergraph theory. One of the main goals of this research is to use them to solve other problems, such as Gromov's problem: are there bounded degree simplicial complexes with the topological overlapping property (""topological expanders""). Other directions of HD expanders have applications in property testing, a very important subject in theoretical computer science. Moreover they can be a tool for the construction of locally testable codes, an important question of theoretical and practical importance in the theory of error correcting codes. In addition, the study of these simplicial complexes suggests new quantum error correcting codes (QECC). It is hoped that it will lead to such codes which are also low density parity check (LDPC). The huge success and impact of the theory of expander graphs suggests that the high dimensional theory will also bring additional unexpected applications beside those which can be foreseen as of now."
Max ERC Funding
1 592 500 €
Duration
Start date: 2016-08-01, End date: 2021-07-31
Project acronym HIPS
Project High-Performance Secure Computation with Applications to Privacy and Cloud Security
Researcher (PI) Yehuda Lindell
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary "Secure two-party and multiparty computation has long stood at the center of the foundations of theoretical cryptography. However, in the last five years there has been blistering progress on the question of efficient secure computation. We are close to the stage that secure computation can be applied to real-world privacy and security problems. There is thus considerable interest in secure computation solutions from governments, military and security organisations, and industry. However, in order to answer the needs of secure computation in practice, there is still a need to make secure computation protocols much faster.
Until now, research in efficient cryptographic protocols has typically been in two different directions. The first direction, and the major one, is to construct more efficient protocols and prove them secure, where efficiency is measured by the amount of communication sent, the number of heavy cryptographic operations carried out (e.g., exponentiations), and so on. The second direction is to take the state-of-the-art protocols and implement them while optimising the implementation based on systems concerns. This latter direction has proven to improve the efficiency of existing protocols significantly, but is limited since it remains within the constraints of existing cryptographic approaches.
We propose a synergistic approach towards achieving high-performance secure computation. We will design new protocols while combining research from cryptography, algorithms and systems. In this way, issues like load balancing, memory management, cache-awareness, bandwidth bottlenecks, utilisation of parallel computing resources, and more, will be built into the cryptographic protocol rather than considered merely as an afterthought. If successful, HIPS will enable the application of the beautiful theory of secure computation to the problems of privacy in the digital era, cloud security and more.
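A minimal sketch of one building block of secure multiparty computation, additive secret sharing (a toy in Python; the modulus, the function names and the two-party setting are illustrative and far simpler than the protocols the project will design):

import secrets

P = 2**61 - 1  # a public prime modulus; the specific choice is illustrative

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares mod P; each share alone reveals nothing."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

a0, a1 = share(25)
b0, b1 = share(17)
# Each party adds its own shares locally; neither party ever sees 25 or 17.
print(reconstruct((a0 + b0) % P, (a1 + b1) % P))  # -> 42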
Max ERC Funding
1 999 175 €
Duration
Start date: 2014-10-01, End date: 2019-09-30
Project acronym HOLI
Project Deep Learning for Holistic Inference
Researcher (PI) Amir Globerson
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Machine learning has rapidly evolved in the last decade, significantly improving accuracy on tasks such as image classification. Much of this success can be attributed to the re-emergence of neural nets. However, learning algorithms are still far from achieving the capabilities of human cognition. In particular, humans can rapidly organize an input stream (e.g., textual or visual) into a set of entities, and understand the complex relations between them. In this project I aim to create a general methodology for semantic interpretation of input streams. Such problems fall under the structured-prediction framework, to which I have made numerous contributions. The proposal identifies and addresses three key components required for a comprehensive and empirically effective approach to the problem.
First, we consider the holistic nature of semantic interpretations, where a top-down process chooses a coherent interpretation among the vast number of options. We argue that deep-learning architectures are ideally suited for modeling such coherence scores, and propose to develop the corresponding theory and algorithms. Second, we address the complexity of the semantic representation, where a stream is mapped into a variable number of entities, each having multiple attributes and relations to other entities. We characterize the properties a model should satisfy in order to produce such interpretations, and propose novel models that achieve this. Third, we develop a theory for understanding when such models can be learned efficiently, and how well they can generalize. To achieve this, we address key questions of non-convex optimization, inductive bias and generalization. We expect these contributions to have a dramatic impact on AI systems, from machine reading of text to image analysis. More broadly, they will help bridge the gap between machine learning as an engineering field, and the study of human cognition.
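A minimal sketch of the top-down idea in the first component, assuming a hypothetical toy scorer (nothing here is the project's actual model): candidate interpretations are scored globally and the most coherent one is selected, rather than committing to local decisions independently.

from itertools import product

ENTITIES = ["PERSON", "ORG"]

def coherence_score(tokens, labels):
    """Toy global score: reward simple surface cues, penalize incoherent label switching."""
    score = 0.0
    for tok, lab in zip(tokens, labels):
        if tok.istitle() and lab == "PERSON":
            score += 1.0
        if tok.isupper() and lab == "ORG":
            score += 1.0
    # Crude coherence term: discourage label changes between adjacent tokens.
    score -= 0.5 * sum(l1 != l2 for l1, l2 in zip(labels, labels[1:]))
    return score

tokens = ["Ada", "joined", "IBM"]
best = max(product(ENTITIES, repeat=len(tokens)),
           key=lambda labels: coherence_score(tokens, labels))
print(best)  # the jointly highest-scoring interpretation, not a greedy per-token choice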
Max ERC Funding
1 932 500 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym HomDyn
Project Homogeneous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems.
One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\mathbb{R}/\mathbb{Z}$.
A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term.
One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics.
Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
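For concreteness, the conjecture referred to above in its standard formulation (a reference point, not a new claim): if $\mu$ is a Borel probability measure on $\mathbb{R}/\mathbb{Z}$ that is ergodic and invariant under both
\[
x \mapsto 2x \pmod 1 \quad\text{and}\quad x \mapsto 3x \pmod 1,
\]
then $\mu$ is either Lebesgue measure or supported on a finite orbit of rational points. Rudolph's theorem settles the positive-entropy case; the zero entropy case targeted above remains open.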
Max ERC Funding
2 090 625 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym HQMAT
Project New Horizons in Quantum Matter: From Critical Fluids to High Temperature Superconductivity
Researcher (PI) Erez BERG
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE3, ERC-2018-COG
Summary Understanding the low-temperature behavior of quantum correlated materials has long been one of the central challenges in condensed matter physics. Such materials exhibit a number of interesting phenomena, such as anomalous transport behavior, complex phase diagrams, and high-temperature superconductivity. However, their understanding has been hindered by the lack of suitable theoretical tools to handle such strongly interacting quantum "liquids."
Recent years have witnessed a wave of renewed interest in this long-standing, deep problem, from condensed matter, high-energy, and quantum information physicists alike. The goal of this research program is to exploit the recent progress on these problems to open new ways of understanding strongly-coupled unconventional quantum fluids. We will perform large-scale, sign-problem-free quantum Monte Carlo (QMC) simulations of metals close to quantum critical points, focusing on new regimes beyond the traditional paradigms. New ways to diagnose transport from QMC data will be developed. Exotic phase transitions between an ordinary and a topologically-ordered, fractionalized metal will be studied. In addition, insights will be gained from analytical studies of strongly coupled lattice models, starting from the tractable limit of a large number of degrees of freedom per unit cell. The thermodynamic and transport properties of these models will be studied. These solvable examples will be used to provide a new window into the properties of strongly coupled quantum matter. We will seek "organizing principles" to describe such matter, such as emergent local quantum critical behavior and a hydrodynamic description of electron flow. Connections will be made with the ideas of universal bounds on transport and on the rate of spread of quantum information, as well as with insights from other techniques. While our study will mostly focus on generic, universal features of quantum fluids, implications for specific materials will also be studied.
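A loosely illustrative sketch of the Monte Carlo machinery in a classical toy setting, Metropolis sampling of a 2D Ising model (this is far simpler than the sign-problem-free quantum Monte Carlo the project will run, and all parameters are illustrative):

import math, random

L, T, steps = 16, 2.3, 200_000  # lattice size, temperature, Monte Carlo steps
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def local_field(i, j):
    """Sum of the four nearest-neighbor spins with periodic boundaries."""
    return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
            + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

for _ in range(steps):
    i, j = random.randrange(L), random.randrange(L)
    dE = 2 * spins[i][j] * local_field(i, j)  # energy cost of flipping spin (i, j)
    if dE <= 0 or random.random() < math.exp(-dE / T):  # Metropolis acceptance rule
        spins[i][j] *= -1

m = abs(sum(sum(row) for row in spins)) / L**2
print(f"|magnetization| per site at T={T}: {m:.3f}")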
Max ERC Funding
1 515 400 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym HydraMechanics
Project Mechanical Aspects of Hydra Morphogenesis
Researcher (PI) Kinneret Magda KEREN
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Consolidator Grant (CoG), PE3, ERC-2018-COG
Summary Morphogenesis is one of the most remarkable examples of biological pattern formation. Despite substantial progress in the field, we still do not understand the organizational principles responsible for the robust convergence of the morphogenesis process, across scales, to form viable organisms under variable conditions. We focus here on the less-studied mechanical aspects of this problem, and aim to uncover how mechanical forces and feedback contribute to the formation and stabilization of the body plan. Regenerating Hydra offer a powerful platform to explore this direction, thanks to their simple body plan, extraordinary regeneration capabilities, and the accessibility and flexibility of their tissues. We propose to follow the regeneration of excised tissue segments, which inherit an aligned supra-cellular cytoskeletal organization from the parent Hydra, as well as cell aggregates, which lack any prior organization. We will employ advanced microscopy techniques and develop elaborate image analysis tools to track cytoskeletal organization and collective cell migration and correlate them with global tissue morphology, from the onset of regeneration all the way to the formation of complete animals. Furthermore, to directly probe the influence of mechanics on Hydra morphogenesis, we propose to apply various mechanical perturbations, and intervene with the axis formation process using external forces and mechanical constraints. Overall, the proposed work seeks to develop an effective phenomenological description of morphogenesis during Hydra regeneration, at the level of cells and tissues, and reveal the mechanical basis of this process. More generally, our research will shed light on the role of mechanics in animal morphogenesis, and inspire new approaches for using external forces to direct tissue engineering and advance regenerative medicine.
Max ERC Funding
2 000 000 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym HYDRATIONLUBE
Project Hydration lubrication: exploring a new paradigm
Researcher (PI) Jacob Klein
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE4, ERC-2009-AdG
Summary In recent years, as first established in some six papers in Science and Nature from the PI's group, a new paradigm has emerged. This reveals the remarkable - and unsuspected - role of hydration layers in modulating frictional forces between sliding surfaces or molecular layers in aqueous media, termed hydration lubrication, in which the lubricating mode is completely different from the classic one of oils or surfactants. In this project we address the substantial challenges that have now arisen: What are the underlying mechanisms controlling this effect? What are the potential breakthroughs that it may lead to? We will answer these questions through several interrelated objectives designed to address both fundamental aspects and limits of applicability. We will use surface force balance (SFB) experiments, for which we will develop new methodologies, to characterize normal and frictional forces between atomically smooth surfaces whose nature (hydrophilic, hydrophobic, metallic, polymeric), as well as their electric potential, may be independently varied. We will examine mono- and multivalent ions to establish the role of relaxation rates and hydration energies in controlling the hydration lubrication, will probe hydration interactions at both hydrophobic and hydrophilic surfaces, and will monitor slip of hydrated ions past surfaces. We will also characterize the hydration lubrication properties of a wide range of novel surface systems, including surfactants, polymer brushes and, importantly, liposomes, using also synchrotron X-ray reflectometry for structural information. Attainment of these objectives should lead to conceptual breakthroughs both in our understanding of this new paradigm and in its practical implications.
Max ERC Funding
2 304 180 €
Duration
Start date: 2010-05-01, End date: 2015-04-30
Project acronym iEXTRACT
Project Information Extraction for Everyone
Researcher (PI) Yoav Goldberg
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Staggering amounts of information are stored in natural language documents, rendering them unavailable to data-science techniques. Information Extraction (IE), a subfield of Natural Language Processing (NLP), aims to automate the extraction of structured information from text, yielding datasets that can be queried, analyzed and combined to provide new insights and drive research forward.
Despite tremendous progress in NLP, IE systems remain mostly inaccessible to the non-NLP-experts who could greatly benefit from them. This stems from the current methods for creating IE systems: the dominant machine-learning (ML) approach requires technical expertise and large amounts of annotated data, and does not give the user control over the extraction process. The previously dominant rule-based approach unrealistically requires the user to anticipate and deal with the nuances of natural language.
I aim to remedy this situation by revisiting rule-based IE in light of advances in NLP and ML. The key idea is to cast IE as a collaborative human-computer effort, in which the user provides domain-specific knowledge, and the system is in charge of solving various domain-independent linguistic complexities, ultimately allowing the user to query unstructured texts via easily structured forms.
More specifically, I aim to develop:
(a) a novel structured representation that abstracts much of the complexity of natural language;
(b) algorithms that derive these representations from texts;
(c) an accessible rule language to query this representation;
(d) AI components that infer the user extraction intents, and based on them promote relevant examples and highlight extraction cases that require special attention.
The ultimate goal of this project is to democratize NLP and bring advanced IE capabilities directly to the hands of domain-experts: doctors, lawyers, researchers and scientists, empowering them to process large volumes of data and advance their profession.
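A minimal sketch of the rule-based idea in spirit (a crude regex stands in for the proposed linguistic representation and rule language, which this project is precisely meant to replace with something more robust; all patterns and strings are illustrative):

import re

# A hypothetical domain rule: "<Drug> is/was approved/indicated for <condition>".
RULE = re.compile(
    r"(?P<drug>[A-Z][a-z]+)\s+(?:is|was)\s+(?:approved|indicated)\s+for\s+"
    r"(?P<condition>[a-z][a-z0-9 ]+)"
)

text = ("Aspirin is indicated for mild pain. "
        "Metformin was approved for type 2 diabetes.")

for m in RULE.finditer(text):
    print({"drug": m.group("drug"), "condition": m.group("condition").strip()})

Such surface patterns break as soon as the phrasing varies; the proposal's point is to let the expert state only the domain knowledge while the system absorbs the linguistic variation.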
Max ERC Funding
1 499 354 €
Duration
Start date: 2019-05-01, End date: 2024-04-30