Project acronym ACCORD
Project Algorithms for Complex Collective Decisions on Structured Domains
Researcher (PI) Edith Elkind
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary The aim of this proposal is to substantially advance the field of Computational Social Choice by developing new tools and methodologies that can be used for making complex group decisions in rich and structured environments. We consider settings where each member of a decision-making body has preferences over a finite set of alternatives, and the goal is to synthesise a collective preference over these alternatives, which may take the form of a partial order with a predefined structure: examples include selecting a fixed-size set of alternatives, a ranking of the alternatives, or a winner and up to two runners-up. We will formulate desiderata that apply to such preference aggregation procedures, design specific procedures that satisfy as many of these desiderata as possible, and develop efficient algorithms for computing them. As the latter step may be infeasible on general preference domains, we will focus on identifying the least restrictive domains that enable efficient computation, and use real-life preference data to verify whether the associated restrictions are likely to be satisfied in realistic preference aggregation scenarios. We will also determine whether our preference aggregation procedures are computationally resistant to malicious behavior. To lower the cognitive burden on the decision-makers, we will extend our procedures to accept partial rankings as inputs. Finally, to further bridge the gap between the theory and practice of collective decision making, we will provide open-source software implementations of our procedures and reach out to potential users for feedback on their practical applicability.
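To make the aggregation setting concrete, the sketch below implements a standard baseline rule (not one of the procedures this project will design): selecting a fixed-size set of alternatives by total Borda score from full rankings. All names and numbers are illustrative.

```python
from collections import defaultdict

def borda_committee(rankings, k):
    """Select k alternatives by total Borda score.

    rankings: full rankings, each ordered from most to least preferred.
    A classical baseline only; the project studies richer procedures
    and structured outputs such as rankings and partial orders.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position  # top choice earns m-1 points
    # Sort by descending score, breaking ties alphabetically.
    ordered = sorted(scores, key=lambda a: (-scores[a], a))
    return ordered[:k]

votes = [["a", "b", "c", "d"],
         ["b", "a", "d", "c"],
         ["a", "c", "b", "d"]]
print(borda_committee(votes, k=2))  # ['a', 'b']
```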
Max ERC Funding
1 395 933 €
Duration
Start date: 2015-07-01, End date: 2020-06-30
Project acronym BASTION
Project Leveraging Binary Analysis to Secure the Internet of Things
Researcher (PI) Thorsten Holz
Host Institution (HI) RUHR-UNIVERSITAET BOCHUM
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary We are in the midst of the shift towards the Internet of Things (IoT), where more and more (legacy) devices are connected to the Internet and communicate with each other. This paradigm shift brings new security challenges, and unfortunately many current security solutions are no longer applicable, e.g., because of a lack of clear network boundaries or because of resource-constrained devices. However, security plays a central role: in addition to its classical function in protecting against manipulation and fraud, it also enables novel applications and innovative business models.
We propose a research program that leverages binary analysis techniques to improve security within the IoT. We concentrate on the software level, since this enables us both to analyze a given device for potential security vulnerabilities and to add security features to harden the device against future attacks. More specifically, we concentrate on the firmware (i.e., the combination of persistent memory together with program code and data that powers such devices) and develop novel mechanisms for binary analysis of such software. We design an intermediate language (IL) that abstracts away from the concrete assembly level, which enables an analysis of many different platforms within a unified analysis framework. We transfer and extend program analysis techniques such as control-/data-flow analysis and symbolic execution and apply them to our IL. Given this novel toolset, we can analyze security properties of a given firmware image (e.g., uncovering undocumented functionality and detecting memory corruption or logical vulnerabilities). We also explore how to harden firmware by retrofitting security mechanisms (e.g., adding control-flow integrity or automatically eliminating unnecessary functionality). This research will deepen our fundamental understanding of binary analysis methods and apply them to a novel area, as it lays the foundations for performing this analysis on the level of intermediate languages.
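As a rough illustration of the intermediate-language idea (a hypothetical toy IL, not BASTION's actual design), the sketch below lifts instructions from two architectures into one common IL and runs a simple, architecture-independent use-before-definition check over it.

```python
from dataclasses import dataclass

@dataclass
class ILStmt:
    """A hypothetical three-address IL statement: dst = op(srcs)."""
    op: str
    dst: str
    srcs: tuple

def lift_arm_mov(dst, src):
    """Toy lifter for an ARM-style 'MOV dst, src'."""
    return ILStmt("assign", dst, (src,))

def lift_x86_add(dst, src):
    """Toy lifter for an x86-style 'ADD dst, src' (reads and writes dst)."""
    return ILStmt("add", dst, (dst, src))

def use_before_def(program):
    """Flag registers read before any IL statement defines them -- the
    kind of flow fact computed once on the IL, independent of the
    source architecture."""
    defined, warnings = set(), []
    for stmt in program:
        for src in stmt.srcs:
            if src not in defined and not src.isdigit():
                warnings.append(f"{src} read before definition in {stmt}")
        defined.add(stmt.dst)
    return warnings

prog = [lift_arm_mov("r0", "5"), lift_x86_add("r1", "r0")]
print(use_before_def(prog))  # r1 is read (as an addend) before being defined
```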
Max ERC Funding
1 472 269 €
Duration
Start date: 2015-03-01, End date: 2020-02-29
Project acronym ChromArch
Project Single Molecule Mechanisms of Spatio-Temporal Chromatin Architecture
Researcher (PI) Johann Christof Manuel Gebhardt
Host Institution (HI) UNIVERSITAET ULM
Call Details Starting Grant (StG), LS1, ERC-2014-STG
Summary Chromatin packaging into the nucleus of eukaryotic cells is highly sophisticated. It not only serves to condense the genomic content into restricted space, but mainly to encode epigenetic traits ensuring temporally controlled and balanced transcription of genes and coordinated DNA replication and repair. The non-random three-dimensional chromatin architecture including looped structures between genomic control elements relies on the action of architectural proteins. However, despite increasing interest in spatio-temporal chromatin organization, mechanistic details of their contributions are not well understood.
With this proposal I aim at unveiling molecular mechanisms of protein-mediated chromatin organization by in vivo single-molecule tracking and quantitative super-resolution imaging of architectural proteins using reflected light sheet microscopy (RLSM). I will measure the interaction dynamics, the spatial distribution and the stoichiometry of architectural proteins throughout the nucleus and at specific chromatin loci within single cells. In complementary single-molecule force spectroscopy experiments using magnetic tweezers (MT), I will study mechanisms of DNA loop formation by structure-mediating proteins in vitro.
Integrating this spatio-temporal and mechanical single-molecule information, I will, in the third sub-project, measure the dynamics of relative end-to-end movements and the forces acting within a looped chromatin structure in living cells.
Taken together, my experiments will greatly enhance our mechanistic understanding of three-dimensional chromatin architecture and inspire future experiments on its regulatory effects on nuclear functions and potential therapeutic utility upon controlled modification.
Max ERC Funding
1 486 578 €
Duration
Start date: 2015-05-01, End date: 2021-04-30
Project acronym CoPS
Project Coevolutionary Policy Search
Researcher (PI) Shimon Azariah Whiteson
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary I propose to develop a new class of decision-theoretic planning methods that overcome fundamental obstacles to the efficient optimization of autonomous agents. Creating agents that are effective in diverse settings is a key goal of artificial intelligence with enormous potential implications: robotic agents would be invaluable in homes, factories, and high-risk settings; software agents could revolutionize e-commerce, information retrieval, and traffic control.
The main challenge lies in specifying an agent's policy: the behavioral strategy that determines its actions. Since the complexity of realistic tasks makes manual policy construction hopeless, there is great demand for decision-theoretic planning methods that automatically discover good policies. Despite enormous progress, the grand challenge of efficiently discovering effective policies for complex tasks remains unmet.
A fundamental obstacle is the cost of policy evaluation: estimating a policy's quality by averaging performance over multiple trials. This cost grows quickly with increases in task complexity (making trials more expensive) or stochasticity (necessitating more trials).
To address this difficulty, I propose a new approach that simultaneously optimizes both policies and the manner in which those policies are evaluated. The key insight is that, in many tasks, many trials are wasted because they do not elicit the controllable rare events critical for distinguishing between policies. Thus, I will develop methods that leverage coevolution to automatically discover the best events, instead of sampling them randomly.
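The cost argument can be made concrete with a minimal Monte Carlo sketch (a generic baseline with illustrative numbers, not the proposed coevolutionary method): a rare but heavily penalized event dominates a policy's true value, yet small trial budgets rarely sample it, so the resulting estimates are misleading.

```python
import random

def evaluate(success_prob, rare_event_prob, rare_penalty, trials):
    """Estimate a policy's value by averaging returns over noisy trials.

    A rare event carries a large penalty; unless the trials happen to
    elicit it, policies that differ mainly in how they handle it look
    alike. Illustrative numbers only.
    """
    total = 0.0
    for _ in range(trials):
        reward = 1.0 if random.random() < success_prob else 0.0
        if random.random() < rare_event_prob:  # rarely sampled, yet decisive
            reward -= rare_penalty
        total += reward
    return total / trials

random.seed(0)
# With 20 trials the rare event is usually never sampled, so the estimate
# ignores it; with 20000 trials its cost is priced in (true value is 0.3).
print(evaluate(0.8, rare_event_prob=0.01, rare_penalty=50.0, trials=20))
print(evaluate(0.8, rare_event_prob=0.01, rare_penalty=50.0, trials=20000))
```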
If successful, this project will greatly improve the efficiency of decision-theoretic planning and, in turn, help realize the potential of autonomous agents. In addition, by automatically identifying the most useful events, the resulting methods will help isolate critical factors in performance and thus yield new insights into what makes decision-theoretic problems hard.
Max ERC Funding
1 480 632 €
Duration
Start date: 2015-10-01, End date: 2021-09-30
Project acronym DASMT
Project Domain Adaptation for Statistical Machine Translation
Researcher (PI) Alexander Fraser
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Rapid translation between European languages is a cornerstone of good governance in the EU, and of great academic and commercial interest. Statistical approaches to machine translation constitute the state of the art. The basic knowledge source is a parallel corpus: texts and their translations. For domains where large parallel corpora are available, such as the proceedings of the European Parliament, a high level of translation quality is reached. However, in countless other domains where large parallel corpora are not available, such as medical literature or legal decisions, translation quality is unacceptably poor.
Domain adaptation as a problem of statistical machine translation (SMT) is a relatively new research area, and there are no standard solutions. The literature contains inconsistent results and heuristics are widely used. We will solve the problem of domain adaptation for SMT on a larger scale than has been previously attempted, and base our results on standardized corpora and open source translation systems.
We will solve two basic problems. The first problem is determining how to benefit from large out-of-domain parallel corpora in domain-specific translation systems. This is an unsolved problem. The second problem is mining and appropriately weighting knowledge available from in-domain texts which are not parallel. While there is initial promising work on mining, weighting is not well studied, an omission which we will correct. We will scale mining by first using Wikipedia, and then mining from the entire web.
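One established way to weight data by domain, shown here purely as a plausible baseline rather than this project's solution, is cross-entropy difference selection in the style of Moore and Lewis: each candidate sentence is scored by how much better an in-domain language model explains it than an out-of-domain one. The toy unigram models below stand in for real language models.

```python
import math
from collections import Counter

def unigram_logprob(sentence, counts, total, vocab_size):
    """Per-token log-probability under an add-one-smoothed unigram model."""
    tokens = sentence.split()
    logprob = sum(math.log((counts[tok] + 1) / (total + vocab_size))
                  for tok in tokens)
    return logprob / max(len(tokens), 1)

def cross_entropy_difference(sentence, in_counts, out_counts):
    """Moore-Lewis style score: higher means more in-domain-like."""
    vocab_size = len(set(in_counts) | set(out_counts))
    in_lp = unigram_logprob(sentence, in_counts, sum(in_counts.values()), vocab_size)
    out_lp = unigram_logprob(sentence, out_counts, sum(out_counts.values()), vocab_size)
    return in_lp - out_lp

# Tiny illustrative corpora: medical (in-domain) vs. parliamentary (out-of-domain).
in_domain = Counter("the patient was given a dose of the drug".split())
out_domain = Counter("the parliament adopted the resolution on the budget".split())
for s in ["the drug dose was reduced", "the budget resolution was adopted"]:
    print(s, "->", round(cross_entropy_difference(s, in_domain, out_domain), 3))
```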
Our work will lead to a breakthrough in translation quality for the vast number of domains with little parallel text available, and will have a direct impact on SMEs providing translation services. The academic impact of our work will be large because solutions to the challenge of domain adaptation apply to all natural language processing systems and to numerous other areas of artificial intelligence research based on machine learning approaches.
Max ERC Funding
1 228 625 €
Duration
Start date: 2015-12-01, End date: 2020-11-30
Project acronym DNAendProtection
Project DNA end protection in Immunity and Cancer
Researcher (PI) Michela Di Virgilio
Host Institution (HI) MAX DELBRUECK CENTRUM FUER MOLEKULARE MEDIZIN IN DER HELMHOLTZ-GEMEINSCHAFT (MDC)
Call Details Starting Grant (StG), LS1, ERC-2014-STG
Summary This proposal addresses a fundamental issue in molecular biology: how is repair of DNA double-strand breaks (DSBs) steered towards the appropriate physiological outcome? DSBs are cytotoxic DNA lesions that arise as a by-product of DNA replication, but also as a physiological intermediate during antigen receptor diversification in the immune system. DNA end processing is a major determinant of DSB repair outcome. Resection of DNA ends is a prerequisite for physiological repair of replication-associated breaks by homologous recombination, but detrimental for productive end-joining events during immunoglobulin class switch recombination (CSR) in B lymphocytes. Furthermore, inappropriate resection of DSBs can cause loss of genetic information and chromosome deletions, which are common features of cancer genomes.
The mechanisms that regulate the balance between DNA end resection and protection are poorly understood. Here, I propose to study the molecular machinery that mediates protection of DNA ends in primary B cells, and the end resection-promoting factors that are antagonized by this activity. We and others have shown that the DNA repair factor 53BP1 plays a crucial role in protecting DNA ends against resection, and consistent with this function, 53BP1 is essential for CSR, but also responsible for aberrant repair of replication-associated DNA damage. In Aims 1 and 2, we will test the hypothesis that dynamic interactions between multiple 53BP1 effectors mediate protection of DNA ends against resection. In Aim 3, we will define the landscape of end resection-promoting factors in mammalian cells via a high-throughput RNAi screen for rescue of CSR in 53BP1-deficient B cells. By elucidating the molecular mechanisms underlying DSB end processing in B lymphocytes, these studies will significantly advance our understanding of the molecular basis of immunodeficiencies and cancer predisposition in lymphoma and solid tumors.
Max ERC Funding
1 993 421 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym DTHPS
Project Sound and Materialism in the 19th Century
Researcher (PI) David John Trippett
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), SH5, ERC-2014-STG
Summary This research project aims to enlarge substantially our understanding of the dialogue between 19th-century music and natural science. It examines in particular how a scientific-materialist conception of sound was formed alongside a dominant culture of romantic idealism. Placing itself at the intersection of historical musicology and the history and philosophy of science, the project will investigate the view that musical sound, ostensibly the property of metaphysics, was also regarded by writers, composers, scientists and engineers as tangible, material and subject to physical laws; that scientific thinking was not anathema but—at key moments—intrinsic to music aesthetics and criticism; that philosophies of mind and theories of the creative process also drew on mechanical rules of causality and associative ‘laws’; and that the technological innovations brought about by scientific research—from steam trains to stethoscopes—were accompanied by new concepts and new ways of listening that radically impacted the sound world of composers, critics, and performers. It seeks, in short, to uncover for the first time a fully integrated view of the musical and scientific culture of the 19th century. The research will be broken down into four areas, each of which circumscribes a particular set of discourses: machines and mechanism; forms of nature; technologies for sound; and music medicalised. Drawing on a range of archival and printed sources in Great Britain, France and Germany, the project offers an innovative approach by examining historical soundscapes and new listening practices, by adopting a media perspective on scientific and musical instruments, and by investigating the interrelations between artistic sounds and non-artistic, industrial technologies. The cross-disciplinary research, divided between the PI and four postdoctoral scholars, will open up new interactions between music and materialism as a concealed site of knowledge and historically significant nexus.
Max ERC Funding
1 496 345 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym FLEXILOG
Project Formal lexically informed logics for searching the web
Researcher (PI) Steven Schockaert
Host Institution (HI) CARDIFF UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Semantic search engines use structured knowledge to improve traditional web search, e.g. by directly answering questions from users. Current approaches to semantic search rely on the unrealistic assumption that all true facts about a given domain are explicitly stated in their knowledge base or on the web. To reach their full potential, semantic search engines need the ability to reason about known facts. However, existing logics cannot adequately deal with the imperfect nature of knowledge from the web. One problem is that relevant information tends to be distributed over several heterogeneous knowledge bases that are inconsistent with each other. Moreover, domain theories are seldom complete, which means that a form of so-called plausible reasoning is needed. Finally, as relevant logical theories do not exist for many domains, reasoning may need to rely on imperfect probabilistic theories that have been learned from the web.
To overcome these challenges, FLEXILOG will introduce a family of logics for robust reasoning with messy real-world knowledge, based on vector-space representations of natural language terms (i.e. of lexical knowledge). In particular, we will use lexical knowledge to estimate the plausibility of logical models, using conceptual simplicity as a proxy for plausibility (i.e. Occam’s razor). This will enable us to implement various forms of commonsense reasoning, equipping classical logic with the ability to draw plausible conclusions based on regularities that are observed in a knowledge base. We will then generalise our approach to probabilistic logics, and show how we can use the resulting lexically informed probabilistic logics to learn accurate and comprehensive domain theories from the web. This project will enable a robust data-driven approach to logic-based semantic search, and more generally lead to fundamental progress in a variety of knowledge-intensive applications for which logical inference has traditionally been too brittle.
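A minimal sketch of the vector-space ingredient, using made-up three-dimensional embeddings in place of vectors learned from text: cosine similarity between term vectors serves as a crude proxy for the conceptual coherence, and hence plausibility, of a candidate fact. This illustrates the general idea only, not the project's logics.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Made-up 3-d embeddings; a real system would use vectors learned from
# text. The plausibility of "X is a Y" is proxied by the similarity of
# X to known instances of Y.
vec = {
    "trout":   (0.9, 0.1, 0.0),
    "salmon":  (0.8, 0.2, 0.1),
    "penguin": (0.1, 0.9, 0.2),
}

def plausibility(candidate, known_instances):
    """Best similarity of the candidate to any known instance -- a toy
    stand-in for ranking logical models by conceptual simplicity."""
    return max(cosine(vec[candidate], vec[k]) for k in known_instances)

print(plausibility("salmon", ["trout"]))   # high: a plausible fish
print(plausibility("penguin", ["trout"]))  # low: an implausible fish
```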
Max ERC Funding
1 451 656 €
Duration
Start date: 2015-05-01, End date: 2020-04-30
Project acronym G4DSB
Project G-quadruplex DNA Structures and Genome Stability
Researcher (PI) Katrin Paeschke
Host Institution (HI) UNIVERSITAETSKLINIKUM BONN
Call Details Starting Grant (StG), LS1, ERC-2014-STG
Summary Secondary structures such as G-quadruplexes (G4s) can form within DNA or RNA. They pose a dramatic risk to genome stability: because they are highly stable, they can block DNA replication, which can lead to DNA breaks. In certain cancer cells, mutations and deletions are observed at G4s if a helicase that is important for G4 unwinding is mutated. Nevertheless, G4s are also discussed as functional elements for cellular processes such as telomere protection, transcription, replication, and meiosis. The aim of this research proposal is to use various biochemical and computational tools to determine which proteins are essential for the formation and regulation of G4s. The proposed experiments will yield insights into both effects of G4s: the risk they pose to genome stability and their significant function for the cell. In aim 1 we will elucidate and identify novel proteins that bind, regulate, and repair G4s, especially in the absence of helicases, in vitro and in vivo. Our focus is to understand how G4s become mutated in the absence of helicases, which proteins are involved, and how genome stability is preserved. In aim 2, we will use cutting-edge techniques to identify regions that form G4s in vivo. Although there is experimental evidence for G4s in vivo, this is not yet commonly accepted. We will provide solid data to support the existence of G4s in vivo. Furthermore, we will survey genome-wide when and why G4s become a risk for genome stability. Aim 3 will focus on the in silico observation that G4 structures are connected to meiosis. In this aim we will use a combination of techniques to unravel the biological significance of G4s during meiosis in vivo.
Due to the connection between G4s and cancer, the data obtained from this research proposal will not only be important for understanding G4 regulation and formation, but will also provide unique knowledge on the impact of G4 structures on genome stability and thereby on human health.
Max ERC Funding
1 531 625 €
Duration
Start date: 2015-07-01, End date: 2021-02-28
Project acronym IDIU
Project Integrated and Detailed Image Understanding
Researcher (PI) Andrea Vedaldi
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary The aim of this project is to create the technology needed to understand the content of images in a detailed, human-like manner, significantly superseding the current limitations of automatic image understanding and enabling far-reaching new human-centric applications. The first goal is to substantially broaden the spectrum of visual information that machines can extract from images. For example, where current technology may discover that there is a "person" in an image, we would like to produce a description such as "person wearing a red uniform, tall, brown-haired, with a bayonet and a long black hat." The second goal is to do so efficiently, by developing integrated image representations that can share knowledge and computation across multiple computer vision tasks, from detecting edges to recognising and describing thousands of different object types.
In order to do so, we will investigate, for the first time in a systematic manner, the breadth of information that humans can extract from images, from abstract patterns to object parts and attributes, and we will incorporate it into the next generation of machine vision systems. Compared to existing technology, the new algorithms will have a significantly richer and more detailed understanding of the content of images. They will be learned from data, building on recent breakthroughs in large-scale discriminative and deep machine learning, and will be delivered as general-purpose open-source software for the benefit of the research community and businesses. In order to make these systems future-proof, we will develop methods to extend them automatically, by learning from images downloaded from the Internet with very little human supervision. These new advanced capabilities will be demonstrated in breakthrough applications in large-scale image search and visual information retrieval.
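The sharing idea can be sketched schematically (hypothetical dimensions and random weights, not the project's actual architecture): one shared representation is computed once per image and then reused by several task-specific heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: an 8x8 'image' and 16-d shared features.
W_shared = rng.normal(size=(16, 64))   # stand-in for a learned feature trunk
W_objects = rng.normal(size=(3, 16))   # head 1: scores for 3 object classes
W_attribs = rng.normal(size=(4, 16))   # head 2: scores for 4 attributes

def shared_representation(image):
    """One pass over the image yields features reused by every task,
    so the (expensive) feature computation is paid for once."""
    return np.tanh(W_shared @ image.ravel())

image = rng.normal(size=(8, 8))
features = shared_representation(image)      # computed once
object_scores = W_objects @ features         # reused by the object head
attribute_scores = W_attribs @ features      # and by the attribute head
print(object_scores.shape, attribute_scores.shape)  # (3,) (4,)
```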
Max ERC Funding
1 497 271 €
Duration
Start date: 2015-08-01, End date: 2020-07-31