Project acronym 3D-REPAIR
Project Spatial organization of DNA repair within the nucleus
Researcher (PI) Evanthia Soutoglou
Host Institution (HI) THE UNIVERSITY OF SUSSEX
Country United Kingdom
Call Details Consolidator Grant (CoG), LS2, ERC-2015-CoG
Summary Faithful repair of double-stranded DNA breaks (DSBs) is essential, as they are at the origin of genome instability, chromosomal translocations and cancer. Cells repair DSBs through different pathways, which can be faithful or mutagenic, and the balance between them at a given locus must be tightly regulated to preserve genome integrity. Although much is known about DSB repair factors, how the choice between pathways is controlled within the nuclear environment is not understood. We have shown that nuclear architecture and non-random genome organization determine the frequency of chromosomal translocations and that pathway choice is dictated by the spatial organization of DNA in the nucleus. Nevertheless, what determines which pathway is activated in response to DSBs at specific genomic locations is not understood. Furthermore, the impact of 3D-genome folding on the kinetics and efficiency of DSB repair is completely unknown.
Here we aim to understand how nuclear compartmentalization, chromatin structure and genome organization affect the efficiency of detection, signaling and repair of DSBs. We will unravel what determines DNA repair specificity within distinct nuclear compartments using protein tethering, promiscuous biotinylation and quantitative proteomics. We will determine how DNA repair is orchestrated at different heterochromatin structures using a CRISPR/Cas9-based system that allows, for the first time, robust induction of DSBs at specific heterochromatin compartments. Finally, we will investigate the role of 3D-genome folding in the kinetics of DNA repair and pathway choice using single-nucleotide-resolution DSB mapping coupled to 3D topological maps.
This proposal has significant implications for understanding the mechanisms controlling DNA repair within the nuclear environment; it will reveal the regions of the genome that are susceptible to genomic instability and help us understand why certain mutations and translocations are recurrent in cancer.
Max ERC Funding
1 999 750 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym ALUNIF
Project Algorithms and Lower Bounds: A Unified Approach
Researcher (PI) Rahul Santhanam
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary One of the fundamental goals of theoretical computer science is to understand the possibilities and limits of efficient computation. This quest has two dimensions. The theory of algorithms focuses on finding efficient solutions to problems, while computational complexity theory aims to understand when and why problems are hard to solve. These two areas have different philosophies and use different sets of techniques. However, in recent years there have been indications of deep and mysterious connections between them.
In this project, we propose to explore and develop the connections between algorithmic analysis and complexity lower bounds in a systematic way. On the one hand, we plan to use complexity lower bound techniques as inspiration to design new and improved algorithms for Satisfiability and other NP-complete problems, as well as to analyze existing algorithms better. On the other hand, we plan to strengthen implications yielding circuit lower bounds from non-trivial algorithms for Satisfiability, and to derive new circuit lower bounds using these stronger implications.
This project has potential for massive impact in both the areas of algorithms and computational complexity. Improved algorithms for Satisfiability could lead to improved SAT solvers, and the new analytical tools would lead to a better understanding of existing heuristics. Complexity lower bound questions are fundamental but notoriously difficult, and new lower bounds would open the way to unconditionally secure cryptographic protocols and derandomization of probabilistic algorithms. More broadly, this project aims to initiate greater dialogue between the two areas, with an exchange of ideas and techniques which leads to accelerated progress in both, as well as a deeper understanding of the nature of efficient computation.
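To make the baseline concrete: the "non-trivial algorithms for Satisfiability" referred to above are algorithms that beat exhaustive search over all 2^n assignments, even by modest savings (Williams' work, for example, connects such savings for circuit satisfiability to circuit lower bounds, the kind of implication the project proposes to strengthen). The sketch below is illustrative only and not part of the project; it shows the trivial brute-force baseline for CNF Satisfiability, with a hypothetical example formula.

```python
# Illustrative only: the trivial 2^n brute-force baseline for CNF Satisfiability.
# Clause encoding is DIMACS-style: +i means variable i, -i means its negation.
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Return a satisfying assignment as {var: bool}, or None if unsatisfiable."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))
```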
Max ERC Funding
1 274 496 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym ALZSYN
Project Imaging synaptic contributors to dementia
Researcher (PI) Tara Spires-Jones
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Country United Kingdom
Call Details Consolidator Grant (CoG), LS5, ERC-2015-CoG
Summary Alzheimer's disease, the most common cause of dementia in older people, is a devastating condition that is becoming a public health crisis as our population ages. Despite recent progress in Alzheimer’s disease research, we have no disease-modifying drugs, and the past decade has seen a 99.6% failure rate in clinical trials attempting to treat the disease. This project aims to develop relevant therapeutic targets to restore brain function in Alzheimer’s disease by integrating human and model studies of synapses. It is widely accepted in the field that alterations in amyloid beta initiate the disease process. However, the cascade leading from changes in amyloid to widespread tau pathology and neurodegeneration remains unclear. Synapse loss is the strongest pathological correlate of dementia in Alzheimer’s, and mounting evidence suggests that synapse degeneration plays a key role in causing cognitive decline. Here I propose to test the hypothesis that the amyloid cascade begins at the synapse, leading to tau pathology, synapse dysfunction and loss, and ultimately neural circuit collapse causing cognitive impairment. The team will use cutting-edge multiphoton and array tomography imaging techniques to test mechanisms downstream of amyloid beta at synapses, and determine whether intervening in the cascade allows recovery of synapse structure and function. Importantly, I will combine studies in robust models of familial Alzheimer’s disease with studies in postmortem human brain to confirm the relevance of our mechanistic studies to human disease. Finally, human stem cell-derived neurons will be used to test mechanisms and potential therapeutics in neurons expressing the human proteome. Together, these experiments are ground-breaking since they have the potential to further our understanding of how synapses are lost in Alzheimer’s disease and to identify targets for effective therapeutic intervention, which is a critical unmet need in today’s health care system.
Max ERC Funding
2 000 000 €
Duration
Start date: 2016-11-01, End date: 2021-10-31
Project acronym BAYES-KNOWLEDGE
Project Effective Bayesian Modelling with Knowledge before Data
Researcher (PI) Norman Fenton
Host Institution (HI) QUEEN MARY UNIVERSITY OF LONDON
Country United Kingdom
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary This project aims to improve evidence-based decision-making. What makes it radical is that it plans to do this in situations (common for critical risk assessment problems) where there is little or even no data, and hence where traditional statistics cannot be used. To address this problem, Bayesian analysis, which enables domain experts to supplement observed data with subjective probabilities, is normally used. As real-world problems typically involve multiple uncertain variables, Bayesian analysis is extended using a technique called Bayesian networks (BNs). But, despite many great benefits, BNs have been under-exploited, especially in areas where they offer the greatest potential for improvements (law, medicine and systems engineering). This is mainly because of widespread resistance to relying on subjective knowledge. To address this problem, much current research assumes sufficient data are available to make the expert’s input minimal or even redundant; with such data it may be possible to ‘learn’ the underlying BN model. But this approach offers nothing when there is limited or no data. Even when ‘big’ data are available, the resulting models may be superficially objective but fundamentally flawed, as they fail to capture the underlying causal structure that only expert knowledge can provide.
Our solution is to develop a method to systematise the way expert-driven causal BN models can be built and used effectively, either in the absence of data or as a means of determining what future data is really required. The method involves a new way of framing problems and extensions to BN theory, notation and tools. Working with relevant domain experts, along with cognitive psychologists, our methods will be developed and tested experimentally on real-world critical decision problems in medicine, law, forensics, and transport. As the work complements current data-driven approaches, it will lead to improved BN modelling both when there is extensive data and when there is none.
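As a minimal sketch of the kind of expert-driven model the summary has in mind (the variables, probabilities and CPT below are entirely hypothetical, of our own making), the following BN is specified purely from elicited judgements and queried with no data at all:

```python
# Minimal sketch (hypothetical variables and expert-elicited numbers): a three-node
# causal BN queried by exact enumeration, with no observed data required.
from itertools import product

P_hazard = 0.1                    # expert prior: hazard present
P_ctrl_fail = 0.2                 # expert prior: control fails
# expert-elicited CPT: P(accident | hazard present, control fails)
P_acc = {(True, True): 0.9, (True, False): 0.3,
         (False, True): 0.1, (False, False): 0.01}

def joint(h, c, a):
    """Joint probability under the BN factorisation P(h) P(c) P(a | h, c)."""
    ph = P_hazard if h else 1 - P_hazard
    pc = P_ctrl_fail if c else 1 - P_ctrl_fail
    pa = P_acc[(h, c)] if a else 1 - P_acc[(h, c)]
    return ph * pc * pa

# posterior P(hazard | accident observed), summing out the hidden variable
num = sum(joint(True, c, True) for c in (True, False))
den = sum(joint(h, c, True) for h, c in product((True, False), repeat=2))
print(f"P(hazard | accident) = {num / den:.3f}")   # 0.625 with these numbers
```

The point of the toy is that the posterior is computable and auditable before any data exist; data, when they arrive, refine the elicited numbers rather than replace the causal structure.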
Max ERC Funding
1 572 562 €
Duration
Start date: 2014-04-01, End date: 2018-03-31
Project acronym BAYNET
Project Bayesian Networks and Non-Rational Expectations
Researcher (PI) Ran SPIEGLER
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Country United Kingdom
Call Details Advanced Grant (AdG), SH1, ERC-2015-AdG
Summary "This project will develop a new framework for modeling economic agents having ""boundedly rational expectations"" (BRE). It is based on the concept of Bayesian networks (more generally, graphical models), borrowed from statistics and AI. In the framework's basic version, an agent is characterized by a directed acyclic graph (DAG) over the set of all relevant random variables. The DAG is the agent's ""type"" – it represents how he systematically distorts any objective probability distribution into a subjective belief. Technically, the distortion takes the form of the standard Bayesian-network factorization formula given by the agent's DAG. The agent's choice is modeled as a ""personal equilibrium"", because his subjective belief regarding the implications of his actions can vary with his own long-run behavior. The DAG representation unifies and simplifies existing models of BRE, subsuming them as special cases corresponding to distinct graphical representations. It captures hitherto-unmodeled fallacies such as reverse causation. The framework facilitates behavioral characterizations of general classes of models of BRE and expands their applicability. I will demonstrate this with applications to monetary policy, behavioral I.O., asset pricing, etc. I will extend the basic formalism to multi-agent environments, addressing issues beyond the reach of current models of BRE (e.g., formalizing the notion of ""high-order"" limited understanding of statistical regularities). Finally, I will seek a learning foundation for the graphical representation of BRE, in the sense that it will capture how the agent extrapolates his belief from a dataset (drawn from the objective distribution) containing ""missing values"", via some intuitive ""imputation method"". This part, too, borrows ideas from statistics and AI, further demonstrating the project's interdisciplinary nature."
Max ERC Funding
1 379 288 €
Duration
Start date: 2016-07-01, End date: 2022-06-30
Project acronym BEEHIVE
Project Bridging the Evolution and Epidemiology of HIV in Europe
Researcher (PI) Christopher Fraser
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Advanced Grant (AdG), LS2, ERC-2013-ADG
Summary The aim of the BEEHIVE project is to generate novel insight into HIV biology, evolution and epidemiology, leveraging next-generation high-throughput sequencing and bioinformatics to produce and analyse whole genomes of viruses from approximately 3,000 European HIV-1-infected patients. These patients have known dates of infection spread over the last 25 years, good clinical follow-up, and a wide range of clinical prognostic indicators and outcomes. The primary objective is to discover the viral genetic determinants of severity of infection and set-point viral load. This primary objective is high-risk & blue-skies: there is ample indirect evidence of polymorphisms that alter virulence, but they have never been identified, and it is not known how easy they are to discover. However, the project is also high-reward: it could lead to a substantial shift in the understanding of HIV disease.
Technologically, the BEEHIVE project will deliver new approaches for undertaking whole genome association studies on RNA viruses, including delivering an innovative high-throughput bioinformatics pipeline for handling genetically diverse viral quasi-species data (with viral diversity both within and between infected patients).
The project also includes secondary and tertiary objectives that address critical open questions in HIV epidemiology and evolution. The secondary objective is to use viral genetic sequences allied to mathematical epidemic models to better understand the resurgent European epidemic amongst high-risk groups, especially men who have sex with men. The aim will not just be to establish who is at risk of infection, which is known from conventional epidemiological approaches, but also to characterise the risk factors for onwards transmission of the virus. Tertiary objectives involve understanding the relationship between the genetic diversity within viral samples, indicative of ongoing evolution or dual infections, and clinical outcomes.
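As a toy illustration of the primary objective's association analysis (simulated data and hypothetical variable names of our own; the project's actual pipeline must additionally handle within-patient quasi-species diversity and population structure), a naive per-site scan for viral variants associated with set-point viral load might look like this:

```python
# Illustrative sketch only (toy simulated data): a naive per-site association scan
# between viral variants and log10 set-point viral load, Bonferroni-corrected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_sites = 200, 50
genotypes = rng.integers(0, 2, size=(n_patients, n_sites))   # 0/1 variant per site
log_viral_load = rng.normal(4.5, 0.7, n_patients)            # log10 set-point load
log_viral_load += 0.6 * genotypes[:, 7]                      # planted effect at site 7

for site in range(n_sites):
    carriers = log_viral_load[genotypes[:, site] == 1]
    others = log_viral_load[genotypes[:, site] == 0]
    res = stats.ttest_ind(carriers, others, equal_var=False)
    if res.pvalue < 0.05 / n_sites:                           # Bonferroni threshold
        print(f"site {site}: effect = {carriers.mean() - others.mean():.2f}, "
              f"p = {res.pvalue:.2e}")
```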
Max ERC Funding
2 499 739 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym BESTDECISION
Project "Behavioural Economics and Strategic Decision Making: Theory, Empirics, and Experiments"
Researcher (PI) Vincent Paul Crawford
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Advanced Grant (AdG), SH1, ERC-2013-ADG
Summary "I will study questions of central microeconomic importance via interwoven theoretical, empirical, and experimental analyses, from a behavioural perspective combining standard methods with assumptions that better reflect evidence on behaviour and psychological insights. The contributions of behavioural economics have been widely recognized, but the benefits of its insights are far from fully realized. I propose four lines of inquiry that focus on how institutions interact with cognition and behaviour, chosen for their potential to reshape our understanding of important questions and their synergies across lines.
The first line will study nonparametric identification and estimation of reference-dependent versions of the standard microeconomic model of consumer demand or labour supply, the subject of hundreds of empirical studies and perhaps the single most important model in microeconomics. It will allow such studies to consider relevant behavioural factors without imposing structural assumptions as in previous work.
The second line will analyze history-dependent learning in financial crises theoretically and experimentally, with the goal of quantifying how market structure influences the likelihood of a crisis.
The third line will study strategic thinking experimentally, using a powerful new design that links subjects’ searches for hidden payoff information (“eye-movements”) much more directly to thinking.
The fourth line will significantly advance Myerson and Satterthwaite’s analyses of optimal design of bargaining rules and auctions, which first went beyond the analysis of given institutions to study what is possible by designing new institutions, replacing their equilibrium assumption with a nonequilibrium model that is well supported by experiments.
The synergies among these four lines’ theoretical analyses, empirical methods, and data analyses will accelerate progress on each line well beyond what would be possible in a piecemeal approach.
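For concreteness, the reference-dependent demand and labour-supply models in the first line are typically built around a utility of the following general shape (our notation, shown only to indicate the object whose nonparametric identification and estimation is at stake): consumption utility plus a gain-loss term evaluated against a reference point r,

```latex
U(c \mid r) \;=\; m(c) \;+\; \sum_{k} \mu\!\big( m_k(c_k) - m_k(r_k) \big),
```

where the gain-loss function \mu is steeper for losses than for gains. The nonparametric programme described above seeks to identify such components without imposing functional forms on m or \mu.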
Max ERC Funding
1 985 373 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym BIGBAYES
Project Rich, Structured and Efficient Learning of Big Bayesian Models
Researcher (PI) Yee Whye Teh
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary As datasets grow ever larger in scale, complexity and variety, there is an increasing need for powerful machine learning and statistical techniques that are capable of learning from such data. Bayesian nonparametrics is a promising approach to data analysis that is increasingly popular in machine learning and statistics. Bayesian nonparametric models are highly flexible models with infinite-dimensional parameter spaces that can be used to directly parameterise and learn about functions, densities, conditional distributions, etc., and have been successfully applied to regression, survival analysis, language modelling, time series analysis, and visual scene analysis, among others. However, to successfully use Bayesian nonparametric models to analyse the high-dimensional and structured datasets now commonly encountered in the age of Big Data, we will have to overcome a number of challenges. Namely, we need to develop Bayesian nonparametric models that can learn rich representations from structured data, and we need computational methodologies that can scale effectively to the large and complex models of the future. We will ground our developments in relevant applications, particularly to natural language processing (learning distributed representations for language modelling and compositional semantics) and genetics (modelling genetic variations arising from population, genealogical and spatial structures).
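As a minimal, self-contained illustration of the kind of infinite-dimensional prior Bayesian nonparametrics works with (not one of the project's models), the sketch below draws cluster assignments from a Dirichlet process via its Chinese restaurant process representation; the number of clusters is not fixed in advance but grows with the data:

```python
# Minimal illustration: cluster assignments sampled from a Dirichlet process prior
# via the Chinese restaurant process; alpha controls how readily new clusters form.
import random

def chinese_restaurant_process(n_customers, alpha, seed=1):
    random.seed(seed)
    tables = []            # tables[k] = number of customers seated at table k
    assignments = []
    for n in range(n_customers):
        # sit at a new table with prob alpha / (n + alpha),
        # otherwise at an existing table with prob proportional to its size
        weights = tables + [alpha]
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

assignments, tables = chinese_restaurant_process(100, alpha=2.0)
print(f"{len(tables)} clusters for 100 observations; sizes: {tables}")
```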
Max ERC Funding
1 918 092 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym BPI
Project Bayesian Peer Influence: Group Beliefs, Polarisation and Segregation
Researcher (PI) Gilat Levy
Host Institution (HI) LONDON SCHOOL OF ECONOMICS AND POLITICAL SCIENCE
Country United Kingdom
Call Details Consolidator Grant (CoG), SH1, ERC-2015-CoG
Summary "The objective of this research agenda is to provide a new framework to model and analyze dynamics of group beliefs, in order to study phenomena such as group polarization, segregation and inter-group discrimination. We introduce a simple new heuristic, the Bayesian Peer Influence heuristic (BPI), which is based on rational foundations and captures how individuals are influenced by others' beliefs. We will explore the theoretical properties of this heuristic, and apply the model to analyze the implications of belief dynamics on social interactions.
Understanding the formation and evolution of beliefs in groups is an important aspect of many economic applications, such as labour market discrimination. The beliefs that different groups of people have about members of other groups should be central to any theory or empirical investigation of this topic. At the same time, economic models of segregation and discrimination typically do not focus on the evolution and dynamics of group beliefs that allow for such phenomena. There is therefore a need for new tools of analysis for incorporating the dynamics of group beliefs; this is particularly important in order to understand the full implications of policy interventions which often intend to "educate the public". The BPI fills this gap in the literature by offering a tractable and natural heuristic for group communication.
Our aim is to study the theoretical properties of the BPI, as well as its applications to the dynamics of group behavior. Our plan is to: (i) Analyze rational learning from others’ beliefs and characterise the BPI. (ii) Use the BPI to account for cognitive biases in information processing. (iii) Use the BPI to analyze the diffusion of beliefs in social networks. (iv) Apply the BPI to understand the relation between belief polarization, segregation in education and labour market discrimination. (v) Apply the BPI to understand the relation between belief polarization and political outcomes.
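To convey the flavour of the idea (a toy of our own, not the BPI heuristic defined in the project), suppose an agent treats each peer's stated belief about a binary event as reflecting conditionally independent evidence relative to a shared prior; rational updating then pools beliefs additively in log-odds:

```python
# Toy illustration of rationally updating on peers' stated beliefs (not the project's
# BPI heuristic). Assumption (ours): each peer's posterior reflects conditionally
# independent evidence, so deviations from the common prior add in log-odds space.
import math

def logit(p):
    return math.log(p / (1 - p))

def pool_beliefs(prior, peer_beliefs):
    """Pooled posterior for a binary event under the independence assumption above."""
    pooled_logit = logit(prior) + sum(logit(q) - logit(prior) for q in peer_beliefs)
    return 1 / (1 + math.exp(-pooled_logit))

# prior 0.5; three peers report 0.7, 0.6 and 0.8
print(round(pool_beliefs(0.5, [0.7, 0.6, 0.8]), 3))   # about 0.93
```

Even in this toy version the pooled belief is more extreme than any individual peer's report, which hints at how repeated group communication can drive the polarization phenomena the project sets out to study.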
Max ERC Funding
1 662 942 €
Duration
Start date: 2016-08-01, End date: 2022-01-31
Project acronym BroadSem
Project Induction of Broad-Coverage Semantic Parsers
Researcher (PI) Ivan Titov
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Country United Kingdom
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary In the last one or two decades, language technology has achieved a number of important successes, for example, producing functional machine translation systems and beating humans in quiz games. The key bottleneck which prevents further progress in these and many other natural language processing (NLP) applications (e.g., text summarization, information retrieval, opinion mining, dialog and tutoring systems) is the lack of accurate methods for producing meaning representations of texts. Accurately predicting such meaning representations on an open domain with an automatic parser is a challenging and unsolved problem, primarily because of language variability and ambiguity. The reason for the unsatisfactory performance is reliance on supervised learning (learning from annotated resources), with the amounts of annotation required for accurate open-domain parsing exceeding what is practically feasible. Moreover, representations defined in these resources typically do not provide abstractions suitable for reasoning.
In this project, we will induce semantic representations from large amounts of unannotated data (i.e., text which has not been labeled by humans) while guided by information contained in human-annotated data and other forms of linguistic knowledge. This will allow us to scale our approach to many domains and across languages. We will specialize meaning representations for reasoning by modeling relations (e.g., facts) appearing across sentences in texts (document-level modeling), across different texts, and across texts and knowledge bases. Learning to predict this linked data is closely related to learning to reason, including learning the notions of semantic equivalence and entailment. We will jointly induce semantic parsers (e.g., log-linear feature-rich models) and reasoning models (latent factor models) relying on this data, thus ensuring that the semantic representations are informative for applications requiring reasoning.
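As a hedged sketch of the model family the summary mentions (a log-linear, feature-rich scorer over candidate meaning representations; the sentence, features, weights and candidate parses below are toy examples of ours, not the project's parser):

```python
# Toy log-linear semantic parsing model: score candidate meaning representations
# with weighted features and normalise with a softmax over the candidate set.
import math
from collections import Counter

def features(sentence, parse):
    """Toy feature extractor: co-occurrence of each word with each predicate."""
    return Counter((w, pred) for w in sentence.split() for pred in parse)

def score(weights, sentence, parse):
    return sum(weights.get(f, 0.0) * v for f, v in features(sentence, parse).items())

def parse_probs(weights, sentence, candidates):
    scores = [score(weights, sentence, c) for c in candidates]
    z = sum(math.exp(s) for s in scores)
    return {tuple(c): math.exp(s) / z for c, s in zip(candidates, scores)}

sentence = "Sam visited Rome"
candidates = [["visit(Sam, Rome)"], ["locate(Rome, Sam)"]]
weights = {("visited", "visit(Sam, Rome)"): 1.2, ("Rome", "locate(Rome, Sam)"): 0.3}
print(parse_probs(weights, sentence, candidates))   # the first parse wins
```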
Max ERC Funding
1 457 185 €
Duration
Start date: 2016-05-01, End date: 2021-10-31