Project acronym DYNA-MIC
Project Deep non-invasive imaging via scattered-light acoustically-mediated computational microscopy
Researcher (PI) Ori Katz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE7, ERC-2015-STG
Summary Optical microscopy, perhaps the most important tool in biomedical investigation and clinical diagnostics, is currently held back by the assumption that it is not possible to noninvasively image microscopic structures more than a fraction of a millimeter deep inside tissue. The governing paradigm is that high-resolution information carried by light is lost due to random scattering in complex samples such as tissue. While non-optical imaging techniques, employing non-ionizing radiation such as ultrasound, allow deeper investigations, they possess drastically inferior resolution and do not permit microscopic studies of cellular structures, crucial for accurate diagnosis of cancer and other diseases.
I propose a new kind of microscope, one that can peer deep inside visually opaque samples, combining the sub-micron resolution of light with the penetration depth of ultrasound. My novel approach is based on our discovery that information on microscopic structures is contained in random scattered-light patterns. It breaks current limits by exploiting the randomness of scattered light rather than struggling to fight it.
We will transform this concept into a breakthrough imaging platform by combining ultrasonic probing and modulation of light with advanced digital signal processing algorithms, extracting the hidden microscopic structure by two complementary approaches: 1) By exploiting the stochastic dynamics of scattered light using methods developed to surpass the diffraction limit in optical nanoscopy and for compressive sampling, harnessing nonlinear effects. 2) Through the analysis of intrinsic correlations in scattered light that persist deep inside scattering tissue.
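As a rough, hypothetical illustration of the first approach (exploiting stochastic fluctuations; the function and parameter names below are placeholders, not the project's actual algorithms), the following Python/NumPy sketch computes per-pixel temporal statistics over a stack of fluctuating scattered-light frames, the kind of low-order statistic used in fluctuation-based super-resolution methods.

    import numpy as np

    def temporal_statistics(frames):
        """frames: array of shape (T, H, W), a stack of T camera frames of
        dynamically fluctuating scattered light (hypothetical input).
        Returns the per-pixel temporal mean and variance (second central
        moment), the simplest statistics exploited by fluctuation-based
        super-resolution imaging."""
        mean = frames.mean(axis=0)
        var = frames.var(axis=0)           # second-order temporal cumulant
        return mean, var

    # Toy usage with random data standing in for measured speckle frames.
    rng = np.random.default_rng(0)
    frames = rng.random((200, 64, 64))
    mean_img, var_img = temporal_statistics(frames)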
This proposal is formed by bringing together novel insights on the physics of light in complex media, advanced microscopy techniques, and ultrasound-mediated imaging. It is made possible by the new ability to digitally process vast amounts of scattering data, and has the potential to impact many fields.
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym FADER
Project Flight Algorithms for Disaggregated Space Architectures
Researcher (PI) Pinchas Pini Gurfil
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE7, ERC-2011-StG_20101014
Summary Standard spacecraft designs comprise modules assembled in a single monolithic structure. When unexpected situations occur, the spacecraft are unable to adequately respond, and significant functional and financial losses are unavoidable. For instance, if the payload of a spacecraft fails, the whole system becomes unserviceable and substitution of the entire spacecraft is required. It would be much easier to replace only the payload module than to launch a completely new satellite. This idea gives rise to an emerging concept in space engineering termed disaggregated spacecraft. Disaggregated space architectures (DSA) consist of several physically-separated modules, interacting through wireless communication links to form a single virtual platform. Each module has one or more pre-determined functions: navigation, attitude control, power generation and payload operation. The free-flying modules, capable of resource sharing, do not have to operate in a tightly-controlled formation, but are rather required to remain in bounded relative position and attitude, termed cluster flying. DSA enables novel space system architectures, which are expected to be much more efficient, adaptable, robust and responsive. The main goal of the proposed research is to develop technologies beyond the state of the art in order to enable operational flight of DSA, by (i) developing algorithms for semi-autonomous long-duration maintenance of a cluster and cluster network, capable of adding and removing spacecraft modules to/from the cluster and cluster network; (ii) finding methods to autonomously reconfigure the cluster to retain safety- and mission-critical functionality in the face of network degradation or component failures; (iii) designing semi-autonomous cluster scatter and re-gather maneuvers to rapidly evade a debris-like threat; and (iv) validating these algorithms and methods in the Distributed Space Systems Laboratory, in which the PI serves as Principal Investigator.
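For background on the bounded relative motion ("cluster flying") requirement above, the sketch below states the standard linearized Clohessy-Wiltshire equations of relative orbital motion and the textbook no-drift condition; this is generic background, not the algorithms to be developed in FADER.

    \begin{align}
      \ddot{x} - 2n\dot{y} - 3n^{2}x &= 0, \\
      \ddot{y} + 2n\dot{x} &= 0, \\
      \ddot{z} + n^{2}z &= 0
    \end{align}
    % x, y, z: radial, along-track and cross-track offsets of a module from the
    % reference orbit (LVLH frame); n: mean motion of the reference orbit.
    % Secular along-track drift vanishes -- i.e., the relative motion stays
    % bounded -- when the initial conditions satisfy \dot{y}(0) = -2 n x(0).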
Max ERC Funding
1 500 000 €
Duration
Start date: 2011-10-01, End date: 2016-09-30
Project acronym FAFC
Project Foundations and Applications of Functional Cryptography
Researcher (PI) Gil SEGEV
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary "Modern cryptography has successfully followed an ""all-or-nothing"" design paradigm over the years. For example, the most fundamental task of data encryption requires that encrypted data be fully recoverable using the encryption key, but be completely useless without it. Nowadays, however, this paradigm is insufficient for a wide variety of evolving applications, and a more subtle approach is urgently needed. This has recently motivated the cryptography community to put forward a vision of ""functional cryptography'': Designing cryptographic primitives that allow fine-grained access to sensitive data.
This proposal aims at making substantial progress towards realizing the premise of functional cryptography. By tackling challenging key problems in both the foundations and the applications of functional cryptography, I plan to direct the majority of our effort towards addressing the following three fundamental objectives, which span a broad and interdisciplinary range of research directions: (1) Obtain a better understanding of functional cryptography's building blocks, (2) develop functional cryptographic tools and schemes based on well-studied assumptions, and (3) increase the usability of functional cryptographic systems via algorithmic techniques.
Realizing the premise of functional cryptography is of utmost importance not only to the development of modern cryptography, but in fact to our entire technological development, where fine-grained access to sensitive data plays an instrumental role. Moreover, our objectives are tightly related to two of the most fundamental open problems in cryptography: Basing cryptography on widely-believed worst-case complexity assumptions, and basing public-key cryptography on private-key primitives. I strongly believe that meaningful progress towards achieving our objectives will shed new light on these key problems, and thus have a significant impact on our understanding of modern cryptography.
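As a purely illustrative sketch of the fine-grained access idea above, the toy Python code below mimics the standard functional-encryption interface, in which a key derived for a function f lets its holder learn only f(x) from an encryption of x. It is hypothetical and deliberately insecure (the "ciphertext" hides nothing), and it is not the proposal's construction.

    import os

    class ToyFunctionalEncryption:
        """Toy mock of the functional-encryption interface:
        setup() -> (mpk, msk); keygen(msk, f) -> sk_f; encrypt(mpk, x) -> ct;
        decrypt(sk_f, ct) -> f(x). No security whatsoever is provided."""

        def setup(self):
            self.msk = os.urandom(16)        # stand-in master secret key
            return "mpk", self.msk

        def keygen(self, msk, f):
            return (msk, f)                  # sk_f "authorizes" computing f

        def encrypt(self, mpk, x):
            return {"payload": x}            # placeholder ciphertext (not hidden)

        def decrypt(self, sk_f, ct):
            _msk, f = sk_f
            return f(ct["payload"])          # reveals only f(x), by convention

    # Toy usage: a key whose holder may learn only the average of a record.
    fe = ToyFunctionalEncryption()
    mpk, msk = fe.setup()
    sk_avg = fe.keygen(msk, lambda xs: sum(xs) / len(xs))
    print(fe.decrypt(sk_avg, fe.encrypt(mpk, [3, 5, 7])))   # -> 5.0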
Max ERC Funding
1 307 188 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym FAST FILTERING
Project Fast Filtering for Computer Graphics, Vision and Computational Sciences
Researcher (PI) Raanan Fattal
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE6, ERC-2013-StG
Summary The world of digital signal processing, in particular computer graphics, vision and image processing, uses linear and non-linear, explicit and implicit filtering extensively to analyze, process and synthesize images. Given today's high-resolution sensors, these operations are often very time consuming and are limited to devices with high CPU power.
Traditional linear translation-invariant (LTI) transformations, executed using convolution, require O(N^2) operations. This can be lowered to O(N log N) via the FFT over suitable domains. There are very few sets of filters for which optimal, linear-time procedures are known. The situation is more complicated in the newly-emerging domain of non-linear spatially-varying filters. Exact application of such filters requires O(N^2) operations, and acceleration methods involve higher-dimensional spaces, introducing severe memory costs and truncation errors.
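As a concrete reference point for the complexity figures above, here is a short Python/NumPy sketch (illustrative only, not the procedures proposed here) contrasting direct O(N^2) circular convolution with FFT-based O(N log N) circular convolution on a 1-D signal.

    import numpy as np

    def direct_circular_convolution(signal, kernel):
        """O(N^2): sum over all shifts explicitly."""
        n = len(signal)
        return np.array([sum(signal[(i - j) % n] * kernel[j] for j in range(n))
                         for i in range(n)])

    def fft_circular_convolution(signal, kernel):
        """O(N log N): pointwise multiplication in the Fourier domain."""
        return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

    x = np.random.rand(256)
    k = np.random.rand(256)
    assert np.allclose(direct_circular_convolution(x, k),
                       fft_circular_convolution(x, k))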
In this research proposal we intend to derive fast, linear-time procedures for different types of LTI filters by exploiting a deep connection between convolution, spatially-homogeneous elliptic equations and the multigrid method for solving such equations. Based on this circular connection we draw novel prospects for deriving new multiscale filtering procedures.
A second part of this research proposal is devoted to deriving efficient explicit and implicit non-linear spatially-varying edge-aware filters. One front consists of deriving a novel multi-level image decomposition that mimics the action of inhomogeneous diffusion operators. The idea here is, once again, to bridge the gap with numerical analysis and use ideas from multiscale matrix preconditioning for the design of new biorthogonal second-generation wavelets.
Moreover, this proposal outlines a new multiscale preconditioning paradigm combining ideas from algebraic multigrid and combinatorial matrix preconditioning. This intermediate approach offers new ways for overcoming fundamental shortcomings in this domain.
Max ERC Funding
1 320 200 €
Duration
Start date: 2013-08-01, End date: 2018-07-31
Project acronym FDP-MBH
Project Fundamental dynamical processes near massive black holes in galactic nuclei
Researcher (PI) Tal Alexander
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE7, ERC-2007-StG
Summary "I propose to combine analytical studies and simulations to explore fundamental open questions in the dynamics and statistical mechanics of stars near massive black holes. These directly affect key issues such as the rate of supply of single and binary stars to the black hole, the growth and evolution of single and binary massive black holes and the connections to the evolution of the host galaxy, capture of stars around the black hole, the rate and modes of gravitational wave emission from captured compact objects, stellar tidal heating and destruction, and the emergence of ""exotic"" stellar populations around massive black holes. These processes have immediate observational implications and relevance in view of the huge amounts of data on massive black holes and galactic nuclei coming from earth-bound and space-borne telescopes, from across the electromagnetic spectrum, from cosmic rays, and in the near future also from neutrinos and gravitational waves."
Max ERC Funding
880 000 €
Duration
Start date: 2008-09-01, End date: 2013-08-31
Project acronym FOC
Project Foundations of Cryptographic Hardness
Researcher (PI) Iftach Ilan Haitner
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary A fundamental research challenge in modern cryptography is understanding the necessary hardness assumptions required to build different cryptographic primitives. Attempts to answer this question have gained tremendous success in the last 20-30 years. Most notably, it was shown that many highly complicated primitives can be based on the mere existence of one-way functions (i.e., easy to compute and hard to invert), while other primitives cannot be based on such functions. This research has yielded fundamental tools and concepts such as randomness extractors and computational notions of entropy. Yet many of the most fundamental questions remain unanswered.
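As a standard textbook illustration of the "easy to compute, hard to invert" notion mentioned above (background only, not part of the proposal), the Python snippet below evaluates modular exponentiation, a classic candidate one-way function whose inversion is the discrete-logarithm problem; the parameters are hypothetical and far too small for real use.

    # Candidate one-way function: f(x) = g^x mod p (discrete-log based).
    # The forward direction is fast; no efficient algorithm is known for
    # recovering x from f(x) when p is a suitably chosen large prime.
    p = 2_147_483_647           # the Mersenne prime 2^31 - 1 (toy-sized)
    g = 7                       # a primitive root modulo p

    def f(x: int) -> int:
        return pow(g, x, p)     # easy: fast modular exponentiation

    y = f(123_456)              # computing f is cheap
    print(y)                    # inverting f (finding x from y) is the hard part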
Our first goal is to answer the fundamental question of whether cryptography can be based on the assumption that P ≠ NP. Our second and third goals are to build more efficient symmetric-key cryptographic primitives from one-way functions, and to establish effective methods for security amplification of cryptographic primitives. Succeeding in the second and third goals is likely to have great bearing on the way that we construct the most basic cryptographic primitives. A positive answer to the first question would be considered a dramatic result in the cryptography and computational complexity communities.
To address these goals, it is very useful to understand the relationship between different types and quantities of cryptographic hardness. Such understanding typically involves defining and manipulating different types of computational entropy, and comprehending the power of security reductions. We believe that this research will yield new concepts and techniques, with ramifications beyond the realm of foundational cryptography.
Max ERC Funding
1 239 838 €
Duration
Start date: 2015-03-01, End date: 2021-02-28
Project acronym FORECASToneMONTH
Project Forecasting Surface Weather and Climate at One-Month Leads through Stratosphere-Troposphere Coupling
Researcher (PI) Chaim Israel Garfinkel
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE10, ERC-2015-STG
Summary Anomalies in surface temperatures, winds, and precipitation can significantly alter energy supply and demand, cause flooding, and cripple transportation networks. Better management of these impacts can be achieved by extending the duration of reliable predictions of the atmospheric circulation.
Polar stratospheric variability can impact surface weather for well over a month, and this proposed research presents a novel approach towards understanding the fundamentals of how this coupling occurs. Specifically, we are interested in: 1) how predictable are anomalies in the stratospheric circulation? 2) why do only some stratospheric events modify surface weather? and 3) what is the mechanism whereby stratospheric anomalies reach the surface? While this last question may appear academic, several studies indicate that stratosphere-troposphere coupling drives the midlatitude tropospheric response to climate change; therefore, a clearer understanding of the mechanisms will aid in the interpretation of the upcoming changes in the surface climate.
I propose a multi-pronged effort aimed at addressing these questions and improving monthly forecasting. First, carefully designed modelling experiments using a novel modelling framework will be used to clarify how, and under what conditions, stratospheric variability couples to tropospheric variability. Second, novel linkages between variability external to the stratospheric polar vortex and the stratospheric polar vortex will be pursued, thus improving our ability to forecast polar vortex variability itself. To these ends, my group will develop 1) an analytic model for Rossby wave propagation on the sphere, and 2) a simplified general circulation model, which captures the essential processes underlying stratosphere-troposphere coupling. By combining output from the new models, observational data, and output from comprehensive climate models, the connections between the stratosphere and surface climate will be elucidated.
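For background on the Rossby wave modelling mentioned in point (1) above, the relation below is the standard textbook dispersion relation for barotropic Rossby waves on a mid-latitude beta-plane; it is quoted only to make the term concrete and is not the spherical analytic model to be developed.

    \omega = \bar{u}\,k \;-\; \frac{\beta k}{k^{2} + l^{2}}
    % \omega: wave frequency; k, l: zonal and meridional wavenumbers;
    % \bar{u}: background zonal wind; \beta = df/dy: meridional gradient of the
    % Coriolis parameter. Phase propagation is westward relative to the mean
    % flow, and the (k, l)-dependent group velocity controls how the waves
    % propagate.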
Max ERC Funding
1 808 000 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym FSC
Project Fast and Sound Cryptography: From Theoretical Foundations to Practical Constructions
Researcher (PI) Alon Rosen
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Much currently deployed cryptography is designed using more “art'” than “science,” and most of the schemes used in practice lack rigorous justification for their security. While theoretically sound designs do exist, they tend to be quite a bit slower to run and hence are not realistic from a practical point of view. This gap is especially evident in “low-level” cryptographic primitives, which are the building blocks that ultimately process the largest quantities of data.
Recent years have witnessed dramatic progress in the understanding of highly-parallelizable (local) cryptography, and in the construction of schemes based on the mathematics of geometric objects called lattices. Besides being based on firm theoretical foundations, these schemes also allow for very efficient implementations, especially on modern microprocessors. Yet despite all this recent progress, there has not yet been a major effort specifically focused on bringing the efficiency of such constructions as close as possible to practicality; this project will do exactly that.
The main goal of the Fast and Sound Cryptography project is to develop new tools and techniques that would lead to practical and theoretically sound implementations of cryptographic primitives. We plan to draw ideas from both theory and practice, and expect their combination to generate new questions, conjectures, and insights. A considerable fraction of our efforts will be devoted to demonstrating the efficiency of our constructions. This will be achieved by a concrete setting of parameters, allowing for cryptanalysis and direct performance comparison to popular designs.
While our initial focus will be on low-level primitives, we expect our research to also have direct impact on the practical efficiency of higher-level cryptographic tasks. Indeed, many of the recent improvements in the efficiency of lattice-based public-key cryptography can be traced back to research on the efficiency of lattice-based hash functions.
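To make the lattice-based hash functions mentioned above concrete, here is a hedged Python/NumPy sketch of the simplest textbook SIS-style (Ajtai) construction, h_A(x) = A x mod q for a short binary x; the parameters are toy-sized placeholders and this is not the project's design.

    import numpy as np

    # Toy SIS-style lattice hash: h_A(x) = A x mod q for a 0/1 vector x.
    # Collision resistance of the real construction rests on the hardness of
    # the Short Integer Solution (SIS) problem; these parameters offer no
    # security and only show the shape of the map.
    q, n, m = 257, 8, 128                  # modulus, output dim, input dim (toy)
    rng = np.random.default_rng(1)
    A = rng.integers(0, q, size=(n, m))    # public random matrix

    def lattice_hash(x_bits):
        x = np.asarray(x_bits, dtype=np.int64)
        assert x.shape == (m,) and set(np.unique(x)) <= {0, 1}
        return tuple((A @ x) % q)          # ~n*log2(q) output bits from m input bits

    digest = lattice_hash(rng.integers(0, 2, size=m))
    print(digest)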
Max ERC Funding
1 498 214 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym FTHPC
Project Fault Tolerant High Performance Computing
Researcher (PI) Oded Schwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures), resulting from the steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ascertaining fault tolerance for high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort, but severely degrading performance (e.g., checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general purpose fault resilience.
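To make the checkpoint-restart baseline mentioned above concrete, here is a minimal, hypothetical Python sketch of periodic in-memory checkpointing with rollback on a simulated fault; production HPC checkpointing (e.g., to a parallel file system) is far more involved, and this is not the resilience scheme proposed here.

    import copy
    import random

    def run_with_checkpoints(steps, checkpoint_every=10, fault_rate=0.01):
        """Toy iterative computation that snapshots its state periodically and
        rolls back to the last snapshot whenever a (simulated) fault strikes."""
        state = {"step": 0, "value": 0.0}
        checkpoint = copy.deepcopy(state)            # initial checkpoint
        while state["step"] < steps:
            if random.random() < fault_rate:         # simulated transient fault
                state = copy.deepcopy(checkpoint)    # restart: recent work is lost
                continue
            state["value"] += 1.0                    # one unit of useful work
            state["step"] += 1
            if state["step"] % checkpoint_every == 0:
                checkpoint = copy.deepcopy(state)    # pay the checkpointing cost
        return state

    print(run_with_checkpoints(1000))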
Efficient general-purpose solutions (e.g., via error-correcting codes) revolutionized memory and communication devices more than two decades ago, enabling programmers to effectively disregard the otherwise very likely memory and communication errors. The time has come for a similar paradigm shift in the computing regime. I argue that exciting recent advances in error-correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
Max ERC Funding
1 824 467 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym FUNMANIA
Project Functional nano Materials for Neuronal Interfacing Applications
Researcher (PI) Yael Hanein
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE7, ERC-2012-StG_20111012
Summary Recent advances in nanotechnology provide an exciting new toolbox best suited for stimulating and monitoring neurons with very high accuracy and improved biocompatibility. In this project we propose the development of an innovative nanomaterial-based platform to interface with neurons in-vivo with unprecedented resolution. In particular, we aim to form the building blocks for future sight-restoration devices. By doing so we will address one of the most challenging and important applications in the realm of in-vivo neuronal stimulation: a high-acuity artificial retina.
Existing technologies in the field of artificial retinas offer only very limited acuity, and a radically new approach is needed to make the leap to high-resolution stimulation. In this project we propose the development of flexible, electrically conducting, optically addressable, vertically aligned carbon-nanotube-based electrodes as a novel platform for targeting neurons with high fidelity. The morphology and density of the aligned tubes will mimic those of the retinal photoreceptors to achieve record-high resolution.
The most challenging element of the project is the transduction from an optical signal to electrical activation at high resolution, placing this effort at the forefront of nanoscience and nanotechnology research. To deal with this difficult challenge, the vertically aligned carbon nanotubes will be conjugated with additional engineered materials, such as conducting polymers and quantum dots, to build a superior platform allowing unprecedented resolution and biocompatibility. Ultimately, in this project we will focus on devising the materials and processes that will become the building blocks of future devices, so that high-density retinal implants and the consequent restoration of sight become a reality in the foreseeable future.
Max ERC Funding
1 499 560 €
Duration
Start date: 2012-10-01, End date: 2018-09-30