Project acronym 3D-REPAIR
Project Spatial organization of DNA repair within the nucleus
Researcher (PI) Evanthia Soutoglou
Host Institution (HI) THE UNIVERSITY OF SUSSEX
Country United Kingdom
Call Details Consolidator Grant (CoG), LS2, ERC-2015-CoG
Summary Faithful repair of double-stranded DNA breaks (DSBs) is essential, as they are at the origin of genome instability, chromosomal translocations and cancer. Cells repair DSBs through different pathways, which can be faithful or mutagenic, and the balance between them at a given locus must be tightly regulated to preserve genome integrity. Although much is known about DSB repair factors, how the choice between pathways is controlled within the nuclear environment is not understood. We have shown that nuclear architecture and non-random genome organization determine the frequency of chromosomal translocations and that pathway choice is dictated by the spatial organization of DNA in the nucleus. Nevertheless, what determines which pathway is activated in response to DSBs at specific genomic locations is not understood. Furthermore, the impact of 3D-genome folding on the kinetics and efficiency of DSB repair is completely unknown.
Here we aim to understand how nuclear compartmentalization, chromatin structure and genome organization affect the efficiency of detection, signaling and repair of DSBs. We will unravel what determines DNA repair specificity within distinct nuclear compartments using protein tethering, promiscuous biotinylation and quantitative proteomics. We will determine how DNA repair is orchestrated at different heterochromatin structures using a CRISPR/Cas9-based system that allows, for the first time, robust induction of DSBs at specific heterochromatin compartments. Finally, we will investigate the role of 3D-genome folding in the kinetics of DNA repair and pathway choice using single-nucleotide-resolution DSB mapping coupled to 3D-topological maps.
This proposal has significant implications for understanding the mechanisms controlling DNA repair within the nuclear environment. It will reveal the regions of the genome that are susceptible to genomic instability and help us understand why certain mutations and translocations are recurrent in cancer.
Max ERC Funding
1 999 750 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym ALUNIF
Project Algorithms and Lower Bounds: A Unified Approach
Researcher (PI) Rahul Santhanam
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary One of the fundamental goals of theoretical computer science is to understand the possibilities and limits of efficient computation. This quest has two dimensions. The theory of algorithms focuses on finding efficient solutions to problems, while computational complexity theory aims to understand when and why problems are hard to solve. These two areas have different philosophies and use different sets of techniques. However, in recent years there have been indications of deep and mysterious connections between them.
In this project, we propose to explore and develop the connections between algorithmic analysis and complexity lower bounds in a systematic way. On the one hand, we plan to use complexity lower bound techniques as inspiration to design new and improved algorithms for Satisfiability and other NP-complete problems, as well as to analyze existing algorithms better. On the other hand, we plan to strengthen implications yielding circuit lower bounds from non-trivial algorithms for Satisfiability, and to derive new circuit lower bounds using these stronger implications.
This project has potential for massive impact in both the areas of algorithms and computational complexity. Improved algorithms for Satisfiability could lead to improved SAT solvers, and the new analytical tools would lead to a better understanding of existing heuristics. Complexity lower bound questions are fundamental but notoriously difficult, and new lower bounds would open the way to unconditionally secure cryptographic protocols and derandomization of probabilistic algorithms. More broadly, this project aims to initiate greater dialogue between the two areas, with an exchange of ideas and techniques which leads to accelerated progress in both, as well as a deeper understanding of the nature of efficient computation.
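For a sense of the baseline that "non-trivial algorithms for Satisfiability" must beat, the sketch below is a minimal brute-force CNF-SAT checker in Python that tries all 2^n assignments. It is a generic illustration of the exhaustive-search benchmark, not code or a method from the project, and the DIMACS-style clause encoding is an assumption made for this example.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide CNF satisfiability by exhaustive search over all 2^n assignments.

    `clauses` uses DIMACS-style literals: the integer v means variable v is true,
    -v means variable v is false (variables are numbered 1..n_vars).
    """
    for bits in product([False, True], repeat=n_vars):
        # bits[i] is the truth value assigned to variable i + 1
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True, bits
    return False, None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
```

Any algorithm that improves on this exhaustive 2^n-time search, even modestly, is the kind of non-trivial Satisfiability algorithm whose known implications for circuit lower bounds the project proposes to strengthen.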
Max ERC Funding
1 274 496 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym ALZSYN
Project Imaging synaptic contributors to dementia
Researcher (PI) Tara Spires-Jones
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Country United Kingdom
Call Details Consolidator Grant (CoG), LS5, ERC-2015-CoG
Summary Alzheimer's disease, the most common cause of dementia in older people, is a devastating condition that is becoming a public health crisis as our population ages. Despite great recent progress in Alzheimer’s disease research, we have no disease-modifying drugs, and the past decade has seen a 99.6% failure rate in clinical trials attempting to treat the disease. This project aims to develop relevant therapeutic targets to restore brain function in Alzheimer’s disease by integrating human and model studies of synapses. It is widely accepted in the field that alterations in amyloid beta initiate the disease process. However, the cascade leading from changes in amyloid to widespread tau pathology and neurodegeneration remains unclear. Synapse loss is the strongest pathological correlate of dementia in Alzheimer’s, and mounting evidence suggests that synapse degeneration plays a key role in causing cognitive decline. Here I propose to test the hypothesis that the amyloid cascade begins at the synapse, leading to tau pathology, synapse dysfunction and loss, and ultimately neural circuit collapse causing cognitive impairment. The team will use cutting-edge multiphoton and array tomography imaging techniques to test mechanisms downstream of amyloid beta at synapses, and determine whether intervening in the cascade allows recovery of synapse structure and function. Importantly, I will combine studies in robust models of familial Alzheimer’s disease with studies in postmortem human brain to confirm the relevance of our mechanistic studies to human disease. Finally, human stem cell-derived neurons will be used to test mechanisms and potential therapeutics in neurons expressing the human proteome. Together, these experiments are ground-breaking, since they have the potential to further our understanding of how synapses are lost in Alzheimer’s disease and to identify targets for effective therapeutic intervention, which is a critical unmet need in today’s health care system.
Max ERC Funding
2 000 000 €
Duration
Start date: 2016-11-01, End date: 2021-10-31
Project acronym ARITHMUS
Project Peopling Europe: How data make a people
Researcher (PI) Evelyn Sharon Ruppert
Host Institution (HI) GOLDSMITHS' COLLEGE
Country United Kingdom
Call Details Consolidator Grant (CoG), SH3, ERC-2013-CoG
Summary Who are the people of Europe? This question is facing statisticians as they grapple with standardising national census methods so that their numbers can be assembled into a European population. Yet, by so doing—intentionally or otherwise—they also contribute to the making of a European people. This, at least, is the central thesis of ARITHMUS. While this is typically framed as a methodological or statistical problem, the project approaches it as a practical and political problem of assembling multiple national populations into a European population and people.
Why is this both an urgent political and practical problem? Politically, Europe is said to be unable to address itself to a constituted polity and people, which is crucial to European integration. Practically, its efforts to constitute a European population are also being challenged by digital technologies, which are being used to diversify census methods and are bringing into question the comparability of national population data. Consequently, over the next several years Eurostat and national statistical institutes are negotiating regulations for the 2020 census round towards ensuring 'Europe-wide comparability.'
ARITHMUS will follow this process and investigate the practices of statisticians as they juggle scientific independence, national autonomy and EU comparability to innovate census methods. It will then connect this practical work to political questions of the making and governing of a European people and polity. It will do so by going beyond state-of-the-art scholarship on methods, politics and science and technology studies. Five case studies involving discourse analysis and ethnographic methods will investigate the situated practices of EU and national statisticians as they remake census methods, arguably the most fundamental changes since modern censuses were launched over two centuries ago. At the same time, it will attend to how these practices affect the constitution of who are the people of Europe.
Max ERC Funding
1 833 649 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym ASYFAIR
Project Fair and Consistent Border Controls? A Critical, Multi-methodological and Interdisciplinary Study of Asylum Adjudication in Europe
Researcher (PI) Nicholas Mark Gill
Host Institution (HI) THE UNIVERSITY OF EXETER
Country United Kingdom
Call Details Starting Grant (StG), SH3, ERC-2015-STG
Summary ‘Consistency’ is regularly cited as a desirable attribute of border control, but it has received little critical social scientific attention. This interdisciplinary project, at the interface between critical human geography, border studies and law, will scrutinise the consistency of European asylum adjudication in order to develop a richer theoretical understanding of this lynchpin concept. It will move beyond the administrative legal concepts of substantive and procedural consistency by advancing a three-fold conceptualisation of consistency – as everyday practice, discursive deployment of facts and disciplinary technique. In order to generate productive intellectual tension, it will also employ an explicitly antagonistic conceptualisation of the relationship between geography and law that views law as seeking to constrain and systematise lived space. The project will employ an innovative combination of methodologies that will produce unique and rich data sets, including quantitative analysis, multi-sited legal ethnography, discourse analysis and interviews, and the findings are likely to be of interest both to academic communities such as geographers and legal and border scholars, and to policy makers and activists working in border control settings. In 2013 the Common European Asylum System (CEAS) was launched to standardise the procedures of asylum determination. But as yet no sustained multi-methodological assessment of the claims of consistency inherent to the CEAS has been carried out. This project offers not only the opportunity to assess progress towards harmonisation of asylum determination processes in Europe, but will also provide a new conceptual framework with which to approach the dilemmas and risks of inconsistency in an area of law fraught with political controversy and uncertainty around the world. Most fundamentally, the project promises to debunk the myths surrounding the possibility of fair and consistent border controls in Europe and elsewhere.
Max ERC Funding
1 252 067 €
Duration
Start date: 2016-09-01, End date: 2022-02-28
Project acronym BAYES-KNOWLEDGE
Project Effective Bayesian Modelling with Knowledge before Data
Researcher (PI) Norman Fenton
Host Institution (HI) QUEEN MARY UNIVERSITY OF LONDON
Country United Kingdom
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary This project aims to improve evidence-based decision-making. What makes it radical is that it plans to do this in situations (common for critical risk assessment problems) where there is little or even no data, and hence where traditional statistics cannot be used. To address this problem, Bayesian analysis, which enables domain experts to supplement observed data with subjective probabilities, is normally used. As real-world problems typically involve multiple uncertain variables, Bayesian analysis is extended using a technique called Bayesian networks (BNs). But, despite many great benefits, BNs have been under-exploited, especially in areas where they offer the greatest potential for improvements (law, medicine and systems engineering). This is mainly because of widespread resistance to relying on subjective knowledge. To address this problem, much current research assumes sufficient data are available to make the expert’s input minimal or even redundant; with such data it may be possible to ‘learn’ the underlying BN model. But this approach offers nothing when there is limited or no data. Even when ‘big’ data are available, the resulting models may be superficially objective but fundamentally flawed, as they fail to capture the underlying causal structure that only expert knowledge can provide.
Our solution is to develop a method to systemize the way expert-driven causal BN models can be built and used effectively, either in the absence of data or as a means of determining what future data are really required. The method involves a new way of framing problems and extensions to BN theory, notation and tools. Working with relevant domain experts, along with cognitive psychologists, our methods will be developed and tested experimentally on real-world critical decision problems in medicine, law, forensics, and transport. As the work complements current data-driven approaches, it will lead to improved BN modelling both when there is extensive data and when there is none.
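To make the idea of an expert-driven causal BN concrete, the sketch below shows a hypothetical two-node model (Hazard → Incident) specified entirely from expert judgement and queried with no observed data at all. The variable names, probabilities and helper function are illustrative assumptions for this sketch, not models or tools from the project.

```python
# Minimal sketch: an expert-specified two-node causal Bayesian network
# (Hazard -> Incident), queried before any data exist.
# Structure, probabilities and names are illustrative assumptions only.

priors = {"Hazard": 0.01}                      # expert's prior P(Hazard)
cpt_incident = {True: 0.60, False: 0.02}       # expert's P(Incident | Hazard)

def posterior_hazard_given_incident(observed_incident=True):
    """P(Hazard | Incident) by enumeration over the single parent node."""
    joint = {}
    for hazard in (True, False):
        p_h = priors["Hazard"] if hazard else 1 - priors["Hazard"]
        p_i = cpt_incident[hazard] if observed_incident else 1 - cpt_incident[hazard]
        joint[hazard] = p_h * p_i
    return joint[True] / (joint[True] + joint[False])

print(posterior_hazard_given_incident(True))   # ~0.23, from expert knowledge alone
```

In the method described above, such an expert-specified structure would then guide which future data are actually worth collecting, which is the sense in which it complements data-driven BN learning.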
Max ERC Funding
1 572 562 €
Duration
Start date: 2014-04-01, End date: 2018-03-31
Project acronym BAYNET
Project Bayesian Networks and Non-Rational Expectations
Researcher (PI) Ran SPIEGLER
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Country United Kingdom
Call Details Advanced Grant (AdG), SH1, ERC-2015-AdG
Summary "This project will develop a new framework for modeling economic agents having ""boundedly rational expectations"" (BRE). It is based on the concept of Bayesian networks (more generally, graphical models), borrowed from statistics and AI. In the framework's basic version, an agent is characterized by a directed acyclic graph (DAG) over the set of all relevant random variables. The DAG is the agent's ""type"" – it represents how he systematically distorts any objective probability distribution into a subjective belief. Technically, the distortion takes the form of the standard Bayesian-network factorization formula given by the agent's DAG. The agent's choice is modeled as a ""personal equilibrium"", because his subjective belief regarding the implications of his actions can vary with his own long-run behavior. The DAG representation unifies and simplifies existing models of BRE, subsuming them as special cases corresponding to distinct graphical representations. It captures hitherto-unmodeled fallacies such as reverse causation. The framework facilitates behavioral characterizations of general classes of models of BRE and expands their applicability. I will demonstrate this with applications to monetary policy, behavioral I.O., asset pricing, etc. I will extend the basic formalism to multi-agent environments, addressing issues beyond the reach of current models of BRE (e.g., formalizing the notion of ""high-order"" limited understanding of statistical regularities). Finally, I will seek a learning foundation for the graphical representation of BRE, in the sense that it will capture how the agent extrapolates his belief from a dataset (drawn from the objective distribution) containing ""missing values"", via some intuitive ""imputation method"". This part, too, borrows ideas from statistics and AI, further demonstrating the project's interdisciplinary nature."
Max ERC Funding
1 379 288 €
Duration
Start date: 2016-07-01, End date: 2022-06-30
Project acronym BEEHIVE
Project Bridging the Evolution and Epidemiology of HIV in Europe
Researcher (PI) Christopher Fraser
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Advanced Grant (AdG), LS2, ERC-2013-ADG
Summary The aim of the BEEHIVE project is to generate novel insight into HIV biology, evolution and epidemiology, leveraging next-generation high-throughput sequencing and bioinformatics to produce and analyse whole genomes of viruses from approximately 3,000 European HIV-1-infected patients. These patients have known dates of infection spread over the last 25 years, good clinical follow-up, and a wide range of clinical prognostic indicators and outcomes. The primary objective is to discover the viral genetic determinants of severity of infection and set-point viral load. This primary objective is high-risk & blue-skies: there is ample indirect evidence of polymorphisms that alter virulence, but they have never been identified, and it is not known how easy they are to discover. However, the project is also high-reward: it could lead to a substantial shift in the understanding of HIV disease.
Technologically, the BEEHIVE project will deliver new approaches for undertaking whole genome association studies on RNA viruses, including delivering an innovative high-throughput bioinformatics pipeline for handling genetically diverse viral quasi-species data (with viral diversity both within and between infected patients).
The project also includes secondary and tertiary objectives that address critical open questions in HIV epidemiology and evolution. The secondary objective is to use viral genetic sequences allied to mathematical epidemic models to better understand the resurgent European epidemic amongst high-risk groups, especially men who have sex with men. The aim will not just be to establish who is at risk of infection, which is known from conventional epidemiological approaches, but also to characterise the risk factors for onwards transmission of the virus. Tertiary objectives involve understanding the relationship between the genetic diversity within viral samples, indicative of on-going evolution or dual infections, to clinical outcomes.
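The tertiary objective turns on measuring genetic diversity within a single patient's viral sample. One generic way to summarise such within-sample diversity from aligned deep-sequencing reads is per-site Shannon entropy of the base frequencies; the sketch below illustrates only that generic calculation and is not part of, nor a description of, the BEEHIVE bioinformatics pipeline.

```python
import math
from collections import Counter

def per_site_entropy(reads):
    """Shannon entropy (bits) at each alignment column of a toy read pileup.

    `reads` is a list of equal-length aligned read strings; higher entropy
    indicates a more diverse viral quasi-species at that site. Purely
    illustrative -- not the project's actual pipeline.
    """
    entropies = []
    for column in zip(*reads):
        counts = Counter(base for base in column if base in "ACGT")
        total = sum(counts.values())
        if total == 0:
            entropies.append(0.0)
            continue
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

print(per_site_entropy(["ACGT", "ACGA", "ACTA", "ACGA"]))  # approx [0.0, 0.0, 0.81, 0.81]
```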
Max ERC Funding
2 499 739 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym BESTDECISION
Project "Behavioural Economics and Strategic Decision Making: Theory, Empirics, and Experiments"
Researcher (PI) Vincent Paul Crawford
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Advanced Grant (AdG), SH1, ERC-2013-ADG
Summary "I will study questions of central microeconomic importance via interwoven theoretical, empirical, and experimental analyses, from a behavioural perspective combining standard methods with assumptions that better reflect evidence on behaviour and psychological insights. The contributions of behavioural economics have been widely recognized, but the benefits of its insights are far from fully realized. I propose four lines of inquiry that focus on how institutions interact with cognition and behaviour, chosen for their potential to reshape our understanding of important questions and their synergies across lines.
The first line will study nonparametric identification and estimation of reference-dependent versions of the standard microeconomic model of consumer demand or labour supply, the subject of hundreds of empirical studies and perhaps the single most important model in microeconomics. It will allow such studies to consider relevant behavioural factors without imposing structural assumptions as in previous work.
The second line will analyze history-dependent learning in financial crises theoretically and experimentally, with the goal of quantifying how market structure influences the likelihood of a crisis.
The third line will study strategic thinking experimentally, using a powerful new design that links subjects’ searches for hidden payoff information (“eye-movements”) much more directly to thinking.
The fourth line will significantly advance Myerson and Satterthwaite’s analyses of optimal design of bargaining rules and auctions, which first went beyond the analysis of given institutions to study what is possible by designing new institutions, replacing their equilibrium assumption with a nonequilibrium model that is well supported by experiments.
The synergies among these four lines’ theoretical analyses, empirical methods, and data analyses will accelerate progress on each line well beyond what would be possible in a piecemeal approach.
Max ERC Funding
1 985 373 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym BIGBAYES
Project Rich, Structured and Efficient Learning of Big Bayesian Models
Researcher (PI) Yee Whye Teh
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary As datasets grow ever larger in scale, complexity and variety, there is an increasing need for powerful machine learning and statistical techniques that are capable of learning from such data. Bayesian nonparametrics is a promising approach to data analysis that is increasingly popular in machine learning and statistics. Bayesian nonparametric models are highly flexible models with infinite-dimensional parameter spaces that can be used to directly parameterise and learn about functions, densities, conditional distributions, etc., and have been successfully applied to regression, survival analysis, language modelling, time series analysis, and visual scene analysis, among others. However, to successfully use Bayesian nonparametric models to analyse the high-dimensional and structured datasets now commonly encountered in the age of Big Data, we will have to overcome a number of challenges. Namely, we need to develop Bayesian nonparametric models that can learn rich representations from structured data, and we need computational methodologies that can scale effectively to the large and complex models of the future. We will ground our developments in relevant applications, particularly to natural language processing (learning distributed representations for language modelling and compositional semantics) and genetics (modelling genetic variations arising from population, genealogical and spatial structures).
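As a concrete example of the kind of infinite-dimensional prior referred to above, the sketch below draws from a Dirichlet process via a truncated stick-breaking construction, one of the standard building blocks of Bayesian nonparametrics. It is a generic illustration rather than code from the project, and the truncation level and standard-normal base measure are arbitrary assumptions.

```python
import numpy as np

def stick_breaking_dp(alpha=1.0, truncation=100, rng=None):
    """Truncated stick-breaking draw from a Dirichlet process prior.

    Returns mixture weights and atoms (here drawn from a standard-normal
    base measure). The infinite sequence is truncated for computation;
    alpha controls how many components carry appreciable weight.
    """
    rng = np.random.default_rng(rng)
    betas = rng.beta(1.0, alpha, size=truncation)              # stick proportions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining                                 # pi_k
    atoms = rng.standard_normal(truncation)                     # theta_k ~ base measure
    return weights, atoms

w, theta = stick_breaking_dp(alpha=2.0, truncation=50, rng=0)
print(w[:5], w.sum())   # weights decay; the sum approaches 1 as truncation grows
```

The weights decay geometrically in expectation, so only finitely many components matter in practice even though the prior is over an infinite mixture, which is one reason such models are flexible yet still computable.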
Max ERC Funding
1 918 092 €
Duration
Start date: 2014-05-01, End date: 2019-04-30