Project acronym GRBS
Project Gamma Ray Bursts as a Focal Point of High Energy Astrophysics
Researcher (PI) Tsvi Piran
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE9, ERC-2008-AdG
Summary Gamma-Ray Bursts (GRBs), short and intense bursts of gamma-rays originating from random directions in the sky, are the brightest explosions in our Universe. They involve ultra-relativistic motion, huge magnetic fields, the strongest gravitational fields, acceleration of photons, neutrinos and cosmic rays to ultra-high energies, the collapse of massive stars, mergers of neutron star binaries and the formation of newborn black holes. They are at the focal point of relativistic high energy astrophysics and serve as the best laboratory for extreme physics. The internal-external shocks model was formulated to explain their inner workings. This model has had impressive successes in interpreting and predicting GRB properties. Still, it has left many fundamental questions unanswered. Furthermore, it has recently been confronted with puzzling Swift observations of the early afterglow, and it is not clear whether it needs minor revisions or a drastic overhaul. I describe here an extensive research program that deals with practically all aspects of GRBs. From a technical point of view, this program involves sophisticated state-of-the-art computations on one hand, and fundamental theory, phenomenological analysis of observations and data analysis on the other. My goal is to address both old and new open questions, considering, among other options, the possibility that the current model has to be drastically revised. My long-term goal, beyond understanding the inner workings of GRBs, is to create a unified theory of accretion, acceleration and collimation, and of emission of high energy gamma-rays and relativistic particles, that will synergize our understanding of GRBs, AGNs, microquasars, galactic binary black holes, SNRs and other high energy astrophysics phenomena. A second hope is to find ways to utilize GRBs to reveal new physics that cannot be explored otherwise.
Max ERC Funding
1 933 460 €
Duration
Start date: 2009-01-01, End date: 2014-12-31
Project acronym HARMONIC
Project Discrete harmonic analysis for computer science
Researcher (PI) Yuval FILMUS
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Boolean function analysis is a topic of research at the heart of theoretical computer science. It studies functions on n input bits (for example, functions computed by Boolean circuits) from a spectral perspective, by treating them as real-valued functions on the group Z_2^n, and using techniques from Fourier and functional analysis. Boolean function analysis has been applied to a wide variety of areas within theoretical computer science, including hardness of approximation, learning theory, coding theory, and quantum complexity theory.
Despite its immense usefulness, Boolean function analysis has limited scope, since it is only appropriate for studying functions on {0,1}^n (a domain known as the Boolean hypercube). Discrete harmonic analysis is the study of functions on domains possessing richer algebraic structure such as the symmetric group (the group of all permutations), using techniques from representation theory and Sperner theory. The considerable success of Boolean function analysis suggests that discrete harmonic analysis could likewise play a central role in theoretical computer science.
The goal of this proposal is to systematically develop discrete harmonic analysis on a broad variety of domains, with an eye toward applications in several areas of theoretical computer science. We will generalize classical results of Boolean function analysis beyond the Boolean hypercube, to domains such as finite groups, association schemes (a generalization of finite groups), the quantum analog of the Boolean hypercube, and high-dimensional expanders (high-dimensional analogs of expander graphs). Potential applications include a quantum PCP theorem and two outstanding open questions in hardness of approximation: the Unique Games Conjecture and the Sliding Scale Conjecture. Beyond these concrete applications, we expect that the fundamental results we prove will have many other applications that are hard to predict in advance.
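The spectral viewpoint described above can be made concrete in a few lines of code. The sketch below is an illustration only (not part of the proposal): it computes the Fourier coefficients of a function on {0,1}^n by correlating it with the parity characters of Z_2^n, using the 3-bit majority function as an example.

```python
import itertools

def fourier_coefficients(f, n):
    """Fourier expansion of f: {0,1}^n -> R viewed as a function on Z_2^n.

    Each coefficient hat_f(S) = E_x[ f(x) * (-1)^(sum_{i in S} x_i) ],
    indexed by a subset S of coordinates (represented here as a tuple).
    """
    points = list(itertools.product([0, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            # chi_S(x) is the parity character of Z_2^n on coordinates S
            total = sum(f(x) * (-1) ** sum(x[i] for i in S) for x in points)
            coeffs[S] = total / len(points)
    return coeffs

# Majority on 3 bits, with outputs encoded in {-1, +1}
maj3 = lambda x: 1 if sum(x) >= 2 else -1
coeffs = fourier_coefficients(maj3, 3)
```

Since maj3 takes values in {-1, +1}, Parseval's identity forces the squared coefficients to sum to 1, which is a handy sanity check on the computation.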
Max ERC Funding
1 473 750 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym HIPS
Project High-Performance Secure Computation with Applications to Privacy and Cloud Security
Researcher (PI) Yehuda Lindell
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary Secure two-party and multiparty computation has long stood at the center of the foundations of theoretical cryptography. However, in the last five years there has been blistering progress on the question of efficient secure computation. We are close to the stage that secure computation can be applied to real-world privacy and security problems. There is thus considerable interest in secure computation solutions from governments, military and security organisations, and industry. However, in order to answer the needs of secure computation in practice, there is still a need to make secure computation protocols much faster.
Until now, research in efficient cryptographic protocols has typically been in two different directions. The first direction, and the major one, is to construct more efficient protocols and prove them secure, where efficiency is measured by the amount of communication sent, the number of heavy cryptographic operations carried out (e.g., exponentiations), and so on. The second direction is to take the state-of-the-art protocols and implement them while optimising the implementation based on systems concerns. This latter direction has proven to improve the efficiency of existing protocols significantly, but is limited since it remains within the constraints of existing cryptographic approaches.
We propose a synergetic approach towards achieving high-performance secure computation. We will design new protocols while combining research from cryptography, algorithms and systems. In this way, issues like load balancing, memory management, cache-awareness, bandwidth bottlenecks, utilisation of parallel computing resources, and more, will be built into the cryptographic protocol and not considered merely as an afterthought. If successful, HIPS will enable the application of the beautiful theory of secure computation to the problems of privacy in the digital era, cloud security and more.
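As a toy illustration of the kind of primitive this research area builds on (not a protocol from the proposal itself), additive secret sharing lets n parties each hold a random-looking share of a value; sums of shared values can then be computed without any single party seeing the inputs. All names and parameters below are invented for the example.

```python
import random

P = 2**61 - 1  # a Mersenne prime; shares live in the field Z_P

def share(secret, n, rng):
    """Split `secret` into n additive shares over Z_P.

    Any n-1 shares are uniformly random and reveal nothing;
    all n shares sum to the secret modulo P.
    """
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

rng = random.Random(42)
a_shares = share(123, 3, rng)
b_shares = share(456, 3, rng)

# Addition is "free": each party adds its two local shares, with no
# communication, and the resulting shares reconstruct a + b.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
result = reconstruct(sum_shares)  # 579
```

Multiplying shared values, by contrast, does require interaction, which is one reason communication cost is a central efficiency metric in this field.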
Max ERC Funding
1 999 175 €
Duration
Start date: 2014-10-01, End date: 2019-09-30
Project acronym HIRESMEMMANIP
Project Spiking network mechanisms underlying short term memory
Researcher (PI) Eran Stark
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), LS5, ERC-2015-STG
Summary Short term memory (STM) is impaired in old age and in a host of neuropsychiatric disorders, and has been the focus of a multitude of psychological and theoretical studies. However, the underlying neuronal circuit mechanisms remain elusive, mainly due to the lack of experimental tools: we suggest that rapid manipulations at the neuronal level are required for deciphering the underlying mechanisms. We have developed an approach combining large-scale extracellular recordings and high-density multi-site/multi-color optical stimulation (“diode-probes”), which enables high resolution closed-loop manipulation of multiple circuit elements in intact, free, behaving rodents. After training mice and rats to perform bridging-free STM-tasks, we will evaluate local circuit mechanisms in hippocampus and prefrontal cortex. Two broad classes of manipulations will be used: First, necessary components and timescales needed for STM maintenance will be established by focal real-time silencing of specific cell types and real-time jittering of spiking in those cells. Second, sufficient components (neuronal codes) will be determined by a circuit-training phase, in which novel associations between synthetic brain patterns and behaviorally-relevant short-term memory traces will be established. The first class is equivalent to erasing memories and the second to writing them. Together, these manipulations are expected to reveal global and local circuit mechanisms that facilitate STM maintenance in intact animals.
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym HOLI
Project Deep Learning for Holistic Inference
Researcher (PI) Amir Globerson
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2018-CoG
Summary Machine learning has rapidly evolved in the last decade, significantly improving accuracy on tasks such as image classification. Much of this success can be attributed to the re-emergence of neural nets. However, learning algorithms are still far from achieving the capabilities of human cognition. In particular, humans can rapidly organize an input stream (e.g., textual or visual) into a set of entities, and understand the complex relations between those. In this project I aim to create a general methodology for semantic interpretation of input streams. Such problems fall under the structured-prediction framework, to which I have made numerous contributions. The proposal identifies and addresses three key components required for a comprehensive and empirically effective approach to the problem.
First, we consider the holistic nature of semantic interpretations, where a top-down process chooses a coherent interpretation among the vast number of options. We argue that deep-learning architectures are ideally suited for modeling such coherence scores, and propose to develop the corresponding theory and algorithms. Second, we address the complexity of the semantic representation, where a stream is mapped into a variable number of entities, each having multiple attributes and relations to other entities. We characterize the properties a model should satisfy in order to produce such interpretations, and propose novel models that achieve this. Third, we develop a theory for understanding when such models can be learned efficiently, and how well they can generalize. To achieve this, we address key questions of non-convex optimization, inductive bias and generalization. We expect these contributions to have a dramatic impact on AI systems, from machine reading of text to image analysis. More broadly, they will help bridge the gap between machine learning as an engineering field, and the study of human cognition.
Max ERC Funding
1 932 500 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym HOWPER
Project An open or closed process: Determining the global scheme of perception
Researcher (PI) Ehud AHISSAR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), LS5, ERC-2017-ADG
Summary Despite decades of intensive research, there is no agreement about the general scheme of perception: Is the external object a trigger for a brain-internal process (open-loop perception, OLP) or is the object included in brain dynamics during the entire perceptual process (closed-loop perception, CLP)? HOWPER is designed to provide a definite answer to this question in the cases of human touch and vision. What enables this critical test is our development of an explicit CLP hypothesis, which will be contrasted, via specific testable predictions, with the OLP scheme. In the event that CLP is validated, HOWPER will introduce a radical paradigm shift in the study of perception, since almost all current experiments are guided, implicitly or explicitly, by the OLP scheme. If OLP is confirmed, HOWPER will provide the first formal affirmation for its superiority over CLP.
Our approach in this novel paradigm is based on a triangle of interactive efforts comprising theory, analytical experiments, and synthetic experiments. The theoretical effort (WP1) will be based on the core theoretical framework already developed in our lab. The analytical experiments (WP2) will involve human perceivers. The synthetic experiments (WP3) will be performed on synthesized artificial perceivers. The fourth WP will exploit our novel rat-machine hybrid model for testing the neural applicability of the insights gained in the other WPs, whereas the fifth WP will translate our insights into novel visual-to-tactile sensory substitution algorithms.
HOWPER is expected to either revolutionize or significantly advance the field of human perception, to greatly improve visual to tactile sensory substitution approaches and to contribute novel biomimetic algorithms for autonomous robotic agents.
Max ERC Funding
2 493 441 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym Human Decisions
Project The Neural Determinants of Perceptual Decision Making in the Human Brain
Researcher (PI) Redmond O'Connell
Host Institution (HI) THE PROVOST, FELLOWS, FOUNDATION SCHOLARS & THE OTHER MEMBERS OF BOARD OF THE COLLEGE OF THE HOLY & UNDIVIDED TRINITY OF QUEEN ELIZABETH NEAR DUBLIN
Call Details Starting Grant (StG), LS5, ERC-2014-STG
Summary How do we make reliable decisions given sensory information that is often weak or ambiguous? Current theories center on a brain mechanism whereby sensory evidence is integrated over time into a “decision variable” which triggers the appropriate action upon reaching a criterion. Neural signals fitting this role have been identified in monkey electrophysiology but efforts to study the neural dynamics underpinning human decision making have been hampered by technical challenges associated with non-invasive recording. This proposal builds on a recent paradigm breakthrough made by the applicant that enables parallel tracking of discrete neural signals that can be unambiguously linked to the three key information processing stages necessary for simple perceptual decisions: sensory encoding, decision formation and motor preparation. Chief among these is a freely-evolving decision variable signal which builds at an evidence-dependent rate up to an action-triggering threshold and precisely determines the timing and accuracy of perceptual reports at the single-trial level. This provides an unprecedented neurophysiological window onto the distinct parameters of the human decision process such that the underlying mechanisms of several major behavioral phenomena can finally be investigated. This proposal seeks to develop a systems-level understanding of perceptual decision making in the human brain by tackling three core questions: 1) what are the neural adaptations that allow us to deal with speed pressure and variations in the reliability of the physically presented evidence? 2) What neural mechanism determines our subjective confidence in a decision? and 3) How does aging impact on the distinct neural components underpinning perceptual decision making? 
Each of the experiments described in this proposal will definitively test key predictions from prominent theoretical models using a combination of temporally precise neurophysiological measurement and psychophysical modelling.
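The decision-variable account sketched above is commonly formalized as bounded evidence accumulation (a drift-diffusion-style model). The toy simulation below is an illustration only, not the applicant's model or code; all parameter values are invented. The decision variable integrates noisy momentary evidence until it crosses an action-triggering bound, yielding a choice and a reaction time on each trial.

```python
import random

def simulate_trial(drift, threshold=1.0, noise=0.1, dt=0.001, max_t=2.0,
                   rng=random):
    """One trial of bounded evidence accumulation.

    The decision variable gains `drift * dt` plus Gaussian noise per step
    until it reaches +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, reaction_time).
    """
    dv, t = 0.0, 0.0
    while t < max_t:
        dv += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
        if dv >= threshold:
            return 1, t
        if dv <= -threshold:
            return 0, t
    return (1 if dv > 0 else 0), t  # lapse trial: no bound crossed in time

rng = random.Random(0)
trials = [simulate_trial(drift=1.5, rng=rng) for _ in range(200)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
```

Stronger evidence (a larger drift rate) produces faster, more accurate responses, while raising the threshold trades speed for accuracy, which is the kind of adaptation the proposal's first core question targets.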
Max ERC Funding
1 382 643 €
Duration
Start date: 2015-05-01, End date: 2020-04-30
Project acronym HumanTrafficking
Project Human Trafficking: A Labor Perspective
Researcher (PI) Hila Shamir
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), SH2, ERC-2017-STG
Summary This project conducts a theoretical, methodological, and normative paradigm shift in the research and analysis of human trafficking, one of the most pressing moral and political challenges of our times. It moves away from the currently predominant approach to trafficking, which focuses on criminal law, border control, and human rights, towards a labor-based approach that targets the structure of labor markets that are prone to severely exploitative labor practices. This shift represents an essential development both in the research of migratory labor practices and in the process of designing more effective, and more just, anti-trafficking measures that are context-sensitive as well as cognizant of global legal and economic trends. The project will include four main parts: 1) Theoretical: articulating and justifying the proposed shift on trafficking from individual rights and culpabilities to structural labor market realities. 2) Case studies: conducting a multidisciplinary study of a series of innovative case studies, in which the labor context emerges as a significant factor in the trafficking nexus – bilateral agreements on migration, national regulations of labor standards and recruiters, unionization, and voluntary corporate codes of conduct. The case-study analysis employs the labor paradigm to elucidate the structural conditions that underlie trafficking, revealing a thus-far mostly unrecognized and under-theorized set of anti-trafficking tools. 3) Clinical Laboratory: collaborating with TAU's Workers' Rights clinic to create a legal laboratory in which the potential and limits of the tools examined in the case studies will be tested.
4) Normative: assessing the success of existing strategies and expanding on them to devise innovative tools for a just, practicable, and effective anti-trafficking policy, that can reach significantly more individuals vulnerable to trafficking, by providing them with legal mechanisms for avoiding and resisting exploitation.
Max ERC Funding
1 492 250 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym iEXTRACT
Project Information Extraction for Everyone
Researcher (PI) Yoav Goldberg
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Staggering amounts of information are stored in natural language documents, rendering them unavailable to data-science techniques. Information Extraction (IE), a subfield of Natural Language Processing (NLP), aims to automate the extraction of structured information from text, yielding datasets that can be queried, analyzed and combined to provide new insights and drive research forward.
Despite tremendous progress in NLP, IE systems remain mostly inaccessible to non-NLP-experts who could greatly benefit from them. This stems from the current methods for creating IE systems: the dominant machine-learning (ML) approach requires technical expertise and large amounts of annotated data, and does not give the user control over the extraction process. The previously dominant rule-based approach unrealistically requires the user to anticipate and deal with the nuances of natural language.
I aim to remedy this situation by revisiting rule-based IE in light of advances in NLP and ML. The key idea is to cast IE as a collaborative human-computer effort, in which the user provides domain-specific knowledge, and the system is in charge of solving various domain-independent linguistic complexities, ultimately allowing the user to query
unstructured texts via simple structured forms.
More specifically, I aim to develop:
(a) a novel structured representation that abstracts much of the complexity of natural language;
(b) algorithms that derive these representations from texts;
(c) an accessible rule language to query this representation;
(d) AI components that infer the user's extraction intents and, based on them, promote relevant examples and highlight extraction cases that require special attention.
The ultimate goal of this project is to democratize NLP and bring advanced IE capabilities directly to the hands of
domain-experts: doctors, lawyers, researchers and scientists, empowering them to process large volumes of data and
advance their profession.
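The collaborative rule-based approach described above can be illustrated with a minimal sketch: a domain expert supplies a declarative extraction rule, and a generic engine turns matches into structured records. Everything here (the rule syntax, names, and sample text) is a hypothetical illustration using plain regular expressions, not the project's actual system, which would additionally resolve linguistic variation so one rule covers many phrasings.

```python
import re

def extract(rule_pattern, text):
    """Apply an extraction rule with named capture groups and
    return a list of structured records (one dict per match)."""
    return [m.groupdict() for m in re.finditer(rule_pattern, text)]

# A domain expert writes one simple rule; the capture-group names
# become the columns of the resulting dataset.
rule = r"(?P<person>[A-Z][a-z]+ [A-Z][a-z]+) works at (?P<org>[A-Z][A-Za-z ]+)"

text = "Ada Lovelace works at Analytical Engines. Alan Turing works at NPL."
records = extract(rule, text)
# records is now a queryable dataset, e.g.
# [{'person': 'Ada Lovelace', 'org': 'Analytical Engines'}, ...]
```

The design point is the division of labor: the user states only domain knowledge (who relates to what), while matching over the text is delegated to the engine; in the proposed system the matching would operate over a linguistic representation rather than raw strings.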
Max ERC Funding
1 499 354 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym IMMUNE/MEMORY AGING
Project Can immune system rejuvenation restore age-related memory loss?
Researcher (PI) Michal Eisenbach-Schwartz
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), LS5, ERC-2008-AdG
Summary With increased life expectancy, there has been a critical growth in the portion of the population that suffers from age-related cognitive decline and dementia. Attempts are therefore being made to find ways to slow brain-aging processes; successful therapies would have a significant impact on the quality of life of individuals and decrease healthcare expenditures. Aging of the immune system has never been suggested as a factor in memory loss. My group formulated the concept of 'protective autoimmunity', suggesting a linkage between immunity and self-maintenance in the context of the brain in health and disease. Recently, we showed that T lymphocytes recognizing brain-self antigens have a pivotal role in maintaining hippocampal plasticity, as manifested by reduced neurogenesis and impaired cognitive abilities in T-cell-deficient mice. Taken together, our novel observation that T-cell immunity contributes to hippocampal plasticity, and the fact that T-cell immunity decreases with progressive aging, create the basis for the present proposal. We will focus on the following questions: (a) which aspects of cognition are supported by the immune system: learning, memory, or both; (b) whether aging of the immune system is sufficient to induce aging of the brain; (c) whether activation of the immune system is sufficient to reverse age-related cognitive decline; (d) the mechanism underlying the effect of peripheral immunity on brain cognition; and (e) potential therapeutic implications of our findings. Our preliminary results demonstrate that the immune system contributes to spatial memory and that imposing an immune deficiency is sufficient to cause a reversible memory deficit. These findings give strong reason for optimism that memory loss in the elderly is preventable, and perhaps reversible, by immune-based therapies; we hope that, in the not-too-distant future, our studies will enable the development of a vaccine to prevent CNS aging and cognitive loss in the elderly.
Max ERC Funding
1 650 000 €
Duration
Start date: 2009-01-01, End date: 2012-12-31