Project acronym 5D Heart Patch
Project A Functional, Mature In vivo Human Ventricular Muscle Patch for Cardiomyopathy
Researcher (PI) Kenneth Randall Chien
Host Institution (HI) KAROLINSKA INSTITUTET
Call Details Advanced Grant (AdG), LS7, ERC-2016-ADG
Summary Developing new therapeutic strategies for heart regeneration is a major goal for cardiac biology and medicine. While cardiomyocytes can be generated from human pluripotent stem cells (hPSCs) in vitro, it has proven difficult to use these cells to generate a large-scale, mature human ventricular muscle graft on the injured heart in vivo. The central objective of this proposal is to optimize the generation of a large-scale, pure, fully functional human ventricular muscle patch in vivo through the self-assembly of purified human ventricular progenitors and the localized expression of defined paracrine factors that drive their expansion, differentiation, vascularization, matrix formation, and maturation. Recently, we have found that purified hPSC-derived ventricular progenitors (HVPs) can self-assemble in vivo on the epicardial surface into a 3D vascularized and functional ventricular patch with its own extracellular matrix via a cell-autonomous pathway. A two-step protocol and FACS purification based on HVP surface receptors can generate billions of pure HVPs. The current proposal will lead to the identification of defined paracrine pathways to enhance the survival, grafting/implantation, expansion, differentiation, matrix formation, vascularization and maturation of the graft in vivo. We will capitalize on our unique HVP system and our novel modRNA technology to deliver therapeutic strategies, using the in vivo human ventricular muscle to model arrhythmogenic cardiomyopathy in vivo, and to optimize the ability of the graft to compensate for the massive loss of functional muscle during ischemic cardiomyopathy and post-myocardial infarction. The studies will lead to new in vivo chimeric models of human cardiac disease and an experimental paradigm for organ-on-organ cardiac tissue engineering of an in vivo, functional, mature ventricular patch for cardiomyopathy.
Max ERC Funding
2 149 228 €
Duration
Start date: 2017-12-01, End date: 2022-11-30
Project acronym ALEXANDRIA
Project Large-Scale Formal Proof for the Working Mathematician
Researcher (PI) Lawrence PAULSON
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Mathematical proofs have always been prone to error. Today, proofs can be hundreds of pages long and combine results from many specialisms, making them almost impossible to check. One solution is to deploy modern verification technology. Interactive theorem provers have demonstrated their potential as vehicles for formalising mathematics through achievements such as the verification of the Kepler Conjecture. Proofs done using such tools reach a high standard of correctness.
However, existing theorem provers are unsuitable for mathematics. Their formal proofs are unreadable. They struggle to do simple tasks, such as evaluating limits. They lack much basic mathematics, and the material they do have is difficult to locate and apply.
ALEXANDRIA will create a proof development environment attractive to working mathematicians, utilising the best technology available across computer science. Its focus will be the management and use of large-scale mathematical knowledge, both theorems and algorithms. The project will employ mathematicians to investigate the formalisation of mathematics in practice. Our already substantial formalised libraries will serve as the starting point. They will be extended and annotated to support sophisticated searches. Techniques will be borrowed from machine learning, information retrieval and natural language processing. Algorithms will be treated similarly: ALEXANDRIA will help users find and invoke the proof methods and algorithms appropriate for the task.
ALEXANDRIA will provide (1) comprehensive formal mathematical libraries; (2) search within libraries, and the mining of libraries for proof patterns; (3) automated support for the construction of large formal proofs; (4) sound and practical computer algebra tools.
ALEXANDRIA will be based on legible structured proofs. Formal proofs should not be mere code, but a machine-checkable form of communication between mathematicians.
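To make "legible structured proof" concrete, the toy example below is written in Lean 4 (core library only). ALEXANDRIA itself builds on Isabelle and its structured Isar proof language, so this is an illustration of the genre rather than of the project's tooling: the calc chain reads step by step like informal mathematics, yet every step is checked by the proof assistant.

```lean
-- Toy example of a legible, machine-checked structured proof (Lean 4, core only).
-- Each calc step states an intermediate fact together with its justification,
-- so the argument can be read by a human and verified by the machine.
example {a b c : Nat} (h : a ≤ b) : a ≤ b + c :=
  calc a ≤ b := h
    _ ≤ b + c := Nat.le_add_right b c
```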
Max ERC Funding
2 430 140 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym AlgoRNN
Project Recurrent Neural Networks and Related Machines That Learn Algorithms
Researcher (PI) Juergen Schmidhuber
Host Institution (HI) UNIVERSITA DELLA SVIZZERA ITALIANA
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Recurrent neural networks (RNNs) are general parallel-sequential computers. Some learn their programs or weights. Our supervised Long Short-Term Memory (LSTM) RNNs were the first to win pattern recognition contests, and recently enabled the best known results in speech and handwriting recognition, machine translation, etc. They are now available to billions of users through the world's most valuable public companies, including Google and Apple. Nevertheless, in many real-world tasks RNNs do not yet live up to their full potential. Although universal in theory, in practice they fail to learn important types of algorithms. This ERC project will go far beyond today's best RNNs through novel RNN-like systems that address some of the biggest open RNN problems and hottest RNN research topics: (1) How can RNNs learn to control (through internal spotlights of attention) separate large short-memory structures such as sub-networks with fast weights, to improve performance on many natural short-term memory-intensive tasks which are currently hard to learn by RNNs, such as answering detailed questions on recently observed videos? (2) How can such RNN-like systems metalearn entire learning algorithms that outperform the original learning algorithms? (3) How can we achieve efficient transfer learning from one RNN-learned set of problem-solving programs to new RNN programs solving new tasks? In other words, how can one RNN-like system actively learn to exploit algorithmic information contained in the programs running on another? We will test our systems on existing benchmarks, and create new, more challenging multi-task benchmarks. This will be supported by a rather cheap, GPU-based mini-brain for implementing large RNNs.
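As a purely illustrative aside on point (1), the NumPy sketch below shows one step of a toy "fast weight" recurrent cell: a slow, learned network writes into a rapidly changing fast-weight matrix that serves as a separate short-term memory. The dimensions, update rule and key/value choice here are invented for illustration and are not the project's actual design.

```python
import numpy as np

# Toy sketch of a "fast weight" recurrent step (illustrative only):
# a slow, learned network emits an activation z; a Hebbian-style outer-product
# update writes z into a fast-weight matrix A (a short-term memory), which is
# then read back when computing the next hidden state.
rng = np.random.default_rng(0)
d_in, d_hid = 8, 16

W_slow = rng.normal(scale=0.1, size=(d_hid, d_in + d_hid))  # slow (learned) weights
A = np.zeros((d_hid, d_hid))                                # fast weights (short-term memory)
h = np.zeros(d_hid)

def step(x, h, A, eta=0.5):
    z = np.tanh(W_slow @ np.concatenate([x, h]))  # slow-network output
    k, v = z, np.roll(z, 1)                       # toy key/value derived from z
    A = A + eta * np.outer(v, k)                  # fast-weight write
    h = np.tanh(z + A @ k)                        # read from the fast memory
    return h, A

for _ in range(5):
    h, A = step(rng.normal(size=d_in), h, A)
print(h[:4])
```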
Max ERC Funding
2 500 000 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym ALGSTRONGCRYPTO
Project Algebraic Methods for Stronger Crypto
Researcher (PI) Ronald John Fitzgerald CRAMER
Host Institution (HI) STICHTING NEDERLANDSE WETENSCHAPPELIJK ONDERZOEK INSTITUTEN
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Our field is cryptology. Our overarching objective is to advance significantly the frontiers in design and analysis of high-security cryptography for the future generation. Particularly, we wish to enhance the efficiency, functionality, and, last but not least, fundamental understanding of cryptographic security against very powerful adversaries. Our approach here is to develop completely novel methods by deepening, strengthening and broadening the algebraic foundations of the field.
Concretely, our lens builds on the arithmetic codex. This is a general, abstract cryptographic primitive whose basic theory we recently developed and whose asymptotic part, which relies on algebraic geometry, enjoys crucial applications in surprising foundational results on constant communication-rate two-party cryptography. A codex is a linear (error-correcting) code that, when its ambient vector space is endowed just with coordinate-wise multiplication, can be viewed as simulating, up to some degree, richer arithmetical structures such as finite fields (or products thereof), or, more generally, finite-dimensional algebras over finite fields. Besides this degree, coordinate-localities for which simulation holds and for which it does not at all are also captured.
Our method is based on novel perspectives on codices which significantly widen their scope and strengthen their utility. Particularly, we bring symmetries, computational and complexity-theoretic aspects, and connections with algebraic number theory, algebraic geometry, and algebraic combinatorics into play in novel ways. Our applications range from public-key cryptography to secure multi-party computation.
Our proposal is subdivided into 3 interconnected modules:
(1) Algebraic- and Number Theoretical Cryptanalysis
(2) Construction of Algebraic Crypto Primitives
(3) Advanced Theory of Arithmetic Codices
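The "coordinate-wise multiplication" property of codices can be illustrated by its simplest classical instance, Shamir/Reed-Solomon sharing over a small prime field. The sketch below (field size, degree and number of coordinates chosen only for illustration, not taken from the proposal) shows that multiplying two share vectors coordinate by coordinate yields shares of the product secret on a higher-degree polynomial, i.e. the linear code simulates field multiplication up to a degree bound.

```python
import random

# Toy illustration of the codex idea: Shamir/Reed-Solomon shares over GF(p)
# form a linear code, and the coordinate-wise product of two share vectors
# encodes the product of the two secrets on a degree-2t polynomial.
p, t, n = 101, 1, 5          # prime field, sharing degree, number of coordinates
xs = list(range(1, n + 1))   # evaluation points

def share(secret):
    coeffs = [secret] + [random.randrange(p) for _ in range(t)]
    return [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p for x in xs]

def reconstruct(shares, degree):
    # Lagrange interpolation at 0, using the first degree+1 coordinates.
    pts = list(zip(xs, shares))[: degree + 1]
    secret = 0
    for j, (xj, yj) in enumerate(pts):
        num = den = 1
        for m, (xm, _) in enumerate(pts):
            if m != j:
                num = num * (-xm) % p
                den = den * (xj - xm) % p
        secret = (secret + yj * num * pow(den, p - 2, p)) % p
    return secret

a, b = 7, 12
sa, sb = share(a), share(b)
sab = [(u * v) % p for u, v in zip(sa, sb)]   # coordinate-wise product of shares
print(reconstruct(sab, 2 * t), (a * b) % p)   # both are 84
```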
Max ERC Funding
2 447 439 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym ANGIOPLACE
Project Expression and Methylation Status of Genes Regulating Placental Angiogenesis in Normal, Cloned, IVF and Monoparental Sheep Foetuses
Researcher (PI) Grazyna Ewa Ptak
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TERAMO
Call Details Starting Grant (StG), LS7, ERC-2007-StG
Summary Normal placental angiogenesis is critical for embryonic survival and development. Epigenetic modifications, such as methylation of CpG islands, regulate the expression and imprinting of genes. Epigenetic abnormalities have been observed in embryos from assisted reproductive technologies (ART), which could explain the poor placental vascularisation, embryonic/fetal death, and altered fetal growth in these pregnancies. Both cloned (somatic cell nuclear transfer, or SCNT) and monoparental (parthenogenotes, only maternal genes; androgenotes, only paternal genes) embryos provide important models for studying defects in expression and methylation status/imprinting of genes regulating placental function. Our hypothesis is that placental vascular development is compromised during early pregnancy in embryos from ART, in part due to altered expression or imprinting/methylation status of specific genes regulating placental angiogenesis. We will evaluate fetal growth, placental vascular growth, and expression and epigenetic status of genes regulating placental angiogenesis during early pregnancy in 3 Specific Aims: (1) after natural mating; (2) after transfer of biparental embryos from in vitro fertilization, and SCNT; and (3) after transfer of parthenogenetic or androgenetic embryos. These studies will therefore contribute substantially to our understanding of the regulation of placental development and vascularisation during early pregnancy, and could pinpoint the mechanism contributing to embryonic loss and developmental abnormalities in foetuses from ART. Any or all of these observations will contribute to our understanding of, and our ability to successfully employ, ART, which are becoming very widespread and important in human medicine as well as in animal production.
Max ERC Funding
363 600 €
Duration
Start date: 2008-10-01, End date: 2012-06-30
Project acronym APPROXNP
Project Approximation of NP-hard optimization problems
Researcher (PI) Johan Håstad
Host Institution (HI) KUNGLIGA TEKNISKA HOEGSKOLAN
Call Details Advanced Grant (AdG), PE6, ERC-2008-AdG
Summary The proposed project aims to create a center of excellence devoted to understanding the approximability of NP-hard optimization problems. In particular, for central problems like vertex cover, coloring of graphs, and various constraint satisfaction problems we want to study upper and lower bounds on how well they can be approximated in polynomial time. Many existing strong results are based on what is known as the Unique Games Conjecture (UGC), and a significant part of the project will be devoted to studying this conjecture. We expect that a major step in this process will be to further develop the understanding of Boolean functions on the Boolean hypercube. We anticipate that the tools needed for this will come in the form of harmonic analysis, which in turn will rely on the corresponding results in the analysis of functions over the domain of real numbers.
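For readers unfamiliar with the harmonic analysis of Boolean functions mentioned above: every f : {-1,1}^n -> R has a unique Fourier expansion f(x) = sum over S of fhat(S) * prod_{i in S} x_i, with fhat(S) = E_x[f(x) * prod_{i in S} x_i]. The small script below, written only as an illustration and not part of the proposal, computes these coefficients by brute force for majority on three bits.

```python
from itertools import product
from math import prod

# Brute-force Fourier coefficients of a Boolean function f : {-1,1}^n -> R,
# where fhat(S) = E_x[ f(x) * prod_{i in S} x_i ]  (illustrative only).
def fourier_coefficients(f, n):
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for mask in range(2 ** n):
        S = tuple(i for i in range(n) if mask >> i & 1)
        total = sum(f(x) * prod(x[i] for i in S) for x in points)
        coeffs[S] = total / len(points)
    return coeffs

# Majority on 3 bits: its Fourier mass sits on the three singletons (each 1/2)
# and on the full set {0,1,2} (coefficient -1/2).
maj3 = lambda x: 1 if sum(x) > 0 else -1
print({S: c for S, c in fourier_coefficients(maj3, 3).items() if c != 0})
```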
Max ERC Funding
2 376 000 €
Duration
Start date: 2009-01-01, End date: 2014-12-31
Project acronym AUTOCOMPLEMENT
Project The role of complement in the induction of autoimmunity against post-translationally modified proteins
Researcher (PI) Leendert TROUW
Host Institution (HI) ACADEMISCH ZIEKENHUIS LEIDEN
Call Details Consolidator Grant (CoG), LS7, ERC-2016-COG
Summary In many prevalent autoimmune diseases, such as rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE), autoantibodies are used as diagnostic and prognostic tools. Several of these autoantibodies target proteins that have been post-translationally modified (PTM). Examples of such modifications are citrullination and carbamylation. The success of B cell-targeted therapies in many autoantibody-positive diseases suggests that B cell-mediated autoimmunity plays a direct pathogenic role. Despite the wealth of information on the clinical associations of these anti-PTM protein antibodies as biomarkers, we currently have no insight into why these antibodies are formed.
Immunization studies reveal that PTM proteins can induce antibody responses even in the absence of exogenous adjuvant. The reason why these PTM proteins have ‘autoadjuvant’ properties that lead to a breach of tolerance is currently unknown. In this proposal, I hypothesise that the breach of tolerance towards PTM proteins is mediated by complement factors that bind directly to these PTMs. Our preliminary data indeed reveal that several complement factors bind specifically to PTM proteins. Complement could be involved in the autoadjuvant property of PTM proteins because, in addition to killing pathogens, complement can also boost adaptive immune responses. I plan to unravel the importance of the complement–PTM protein interaction by answering these questions:
1) What is the physiological function of complement binding to PTM proteins?
2) Is the breach of tolerance towards PTM proteins influenced by complement?
3) Can the adjuvant function of PTM be used to increase vaccine efficacy and/or decrease autoreactivity?
With AUTOCOMPLEMENT I will elucidate how PTM-reactive B cells receive ‘autoadjuvant’ signals. This insight will impact on patient care as we can now design strategies to either block unwanted ‘autoadjuvant’ signals to inhibit autoimmunity or to utilize ‘autoadjuvant’ signals to potentiate vaccination.
Max ERC Funding
1 999 803 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym BEAT
Project The functional interaction of EGFR and beta-catenin signalling in colorectal cancer: Genetics, mechanisms, and therapeutic potential.
Researcher (PI) Andrea BERTOTTI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TORINO
Call Details Consolidator Grant (CoG), LS7, ERC-2016-COG
Summary Monoclonal antibodies against the EGF receptor (EGFR) provide substantive benefit to colorectal cancer (CRC) patients. However, no genetic lesions that robustly predict ‘addiction’ to the EGFR pathway have yet been identified. Further, even in tumours that regress after EGFR blockade, subsets of drug-tolerant cells often linger and foster ‘minimal residual disease’ (MRD), which portends tumour relapse.
Our preliminary evidence suggests that reliance on EGFR activity, as opposed to MRD persistence, could be assisted by genetically-based variations in transcription factor partnerships and activities, gene expression outputs, and biological fates controlled by the WNT/beta-catenin pathway. On such premises, BEAT (Beta-catenin and EGFR Abrogation Therapy) will elucidate the mechanisms of EGFR dependency, and escape from it, with the goal to identify biomarkers for more efficient clinical management of CRC and develop new therapies for MRD eradication.
A multidisciplinary approach will be pursued spanning from integrative gene regulation analyses to functional genomics in vitro, pharmacological experiments in vivo, and clinical investigation, to address whether: (i) specific genetic alterations of the WNT pathway affect anti-EGFR sensitivity; (ii) combined neutralisation of EGFR and WNT signals fuels MRD deterioration; (iii) data from analysis of this synergy can lead to the discovery of clinically meaningful biomarkers with predictive and prognostic significance.
This proposal capitalises on a unique proprietary platform for high-content studies based on a large biobank of viable CRC samples, which ensures strong analytical power and unprecedented biological flexibility. By providing fresh insight into the mechanisms whereby WNT/beta-catenin signalling differentially sustains EGFR dependency or drug tolerance, the project is expected to put forward an innovative reinterpretation of CRC molecular bases and advance the rational application of more effective therapies.
Max ERC Funding
1 793 421 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym BeyondBlackbox
Project Data-Driven Methods for Modelling and Optimizing the Empirical Performance of Deep Neural Networks
Researcher (PI) Frank Roman HUTTER
Host Institution (HI) ALBERT-LUDWIGS-UNIVERSITAET FREIBURG
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary Deep neural networks (DNNs) have led to dramatic improvements of the state-of-the-art for many important classification problems, such as object recognition from images or speech recognition from audio data. However, DNNs are also notoriously dependent on the tuning of their hyperparameters. Since their manual tuning is time-consuming and requires expert knowledge, recent years have seen the rise of Bayesian optimization methods for automating this task. While these methods have had substantial successes, their treatment of DNN performance as a black box poses fundamental limitations, allowing manual tuning to be more effective for large and computationally expensive data sets: humans can (1) exploit prior knowledge and extrapolate performance from data subsets, (2) monitor the DNN's internal weight optimization by stochastic gradient descent over time, and (3) reactively change hyperparameters at runtime. We therefore propose to model DNN performance beyond a blackbox level and to use these models to develop for the first time:
1. Next-generation Bayesian optimization methods that exploit data-driven priors to optimize performance orders of magnitude faster than currently possible;
2. Graybox Bayesian optimization methods that have access to -- and exploit -- performance and state information of algorithm runs over time; and
3. Hyperparameter control strategies that learn across different datasets to adapt hyperparameters reactively to the characteristics of any given situation.
DNNs play into our project in two ways. First, in all our methods we will use (Bayesian) DNNs to model and exploit the large amounts of performance data we will collect on various datasets. Second, our application goal is to optimize and control DNN hyperparameters far better than human experts and to obtain:
4. Computationally inexpensive auto-tuned deep neural networks, even for large datasets, enabling the widespread use of deep learning by non-experts.
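As background to the black-box baseline the proposal aims to surpass, the sketch below shows a standard Gaussian-process Bayesian optimisation loop with the Expected Improvement acquisition, tuning a single hyperparameter (log10 learning rate). The objective function is a synthetic stand-in for a real DNN training run, and the scikit-learn/SciPy implementation is an assumed illustration, not the project's method.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Black-box Bayesian optimisation of one hyperparameter (illustrative only):
# the "validation error" below is synthetic; in practice it would be a DNN run.
def validation_error(log_lr):
    return (log_lr + 3.0) ** 2 + 0.05 * np.random.randn()   # minimum near lr = 1e-3

def expected_improvement(mu, sigma, best):
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X = np.array([[-6.0], [-1.0]])                         # two initial evaluations
y = np.array([validation_error(x[0]) for x in X])
candidates = np.linspace(-7, 0, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, validation_error(x_next[0]))

print("best log10(lr):", X[np.argmin(y)][0])
```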
Max ERC Funding
1 495 000 €
Duration
Start date: 2017-01-01, End date: 2021-12-31
Project acronym BigFastData
Project Charting a New Horizon of Big and Fast Data Analysis through Integrated Algorithm Design
Researcher (PI) Yanlei DIAO
Host Institution (HI) ECOLE POLYTECHNIQUE
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary This proposal addresses a pressing need from emerging big data applications such as genomics and data center monitoring: besides the scale of processing, big data systems must also enable perpetual, low-latency processing for a broad set of analytical tasks, referred to as big and fast data analysis. Today’s technology falls severely short for such needs due to the lack of support of complex analytics with scale, low latency, and strong guarantees of user performance requirements. To bridge the gap, this proposal tackles a grand challenge: “How do we design an algorithmic foundation that enables the development of all necessary pillars of big and fast data analysis?” This proposal considers three pillars:
1) Parallelism: There is a fundamental tension between data parallelism (for scale) and pipeline parallelism (for low latency). We propose new approaches based on intelligent use of memory and workload properties to integrate both forms of parallelism.
2) Analytics: The literature lacks a large body of algorithms for critical order-related analytics to be run under data and pipeline parallelism. We propose new algorithmic frameworks to enable such analytics.
3) Optimization: To run analytics, today's big data systems are best-effort only. We transform such systems into a principled optimization framework that suits the new characteristics of big data infrastructure and adapts to meet user performance requirements.
The scale and complexity of the proposed algorithm design make this project high-risk but, at the same time, high-gain: it will lay a solid foundation for big and fast data analysis, enabling a new integrated parallel processing paradigm, algorithms for critical order-related analytics, and a principled optimizer with strong performance guarantees. It will also broadly enable accelerated information discovery in emerging domains such as genomics, as well as economic benefits of early, well-informed decisions and reduced user payments.
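To illustrate the tension described in pillar 1, the toy sketch below combines the two forms of parallelism in plain Python: two pipelined stages connected by a bounded queue (pipeline parallelism, for low latency), with the parsing stage fanning batches out over a thread pool (data parallelism, for scale). All names, batch sizes and records are invented for illustration; the project targets big data engines, not Python threads.

```python
import threading, queue
from concurrent.futures import ThreadPoolExecutor

# Toy sketch contrasting and combining data parallelism and pipeline parallelism.
records = [f"sensor,{i},{i % 7}" for i in range(1000)]
parsed_q = queue.Queue(maxsize=16)          # bounded queue bounds end-to-end latency

def parse(batch):
    return [int(r.split(",")[2]) for r in batch]

def parse_stage():
    with ThreadPoolExecutor(max_workers=4) as pool:
        batches = [records[i:i + 100] for i in range(0, len(records), 100)]
        for values in pool.map(parse, batches):   # data-parallel within the stage
            parsed_q.put(values)
    parsed_q.put(None)                            # end-of-stream marker

def aggregate_stage():
    total = count = 0
    while (values := parsed_q.get()) is not None:  # consumes while parsing continues
        total += sum(values)
        count += len(values)
    print("running mean:", total / count)

t = threading.Thread(target=parse_stage)
t.start()
aggregate_stage()   # runs concurrently with the parse stage (pipeline parallelism)
t.join()
```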
Max ERC Funding
2 472 752 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym Bio-ICD
Project Biological auto-detection and termination of heart rhythm disturbances
Researcher (PI) Daniël Antonie PIJNAPPELS
Host Institution (HI) ACADEMISCH ZIEKENHUIS LEIDEN
Call Details Starting Grant (StG), LS7, ERC-2016-STG
Summary Imagine a heart that could no longer suffer from life-threatening rhythm disturbances, and not because of pills or traumatizing electroshocks from an Implantable Cardioverter Defibrillator (ICD) device. Instead, this heart has become able to rapidly detect & terminate these malignant arrhythmias fully on its own, after gene transfer. In order to explore this novel concept of biological auto-detection & termination of arrhythmias, I will investigate how forced expression of particular engineered proteins could i) allow cardiac tissue to become a detector of arrhythmias through rapid sensing of acute physiological changes upon their initiation, and, after detection, ii) allow this cardiac tissue (now acting as effector) to terminate the arrhythmia by generating a painless electroshock through these proteins.
To this purpose, I will first explore the requirements for such detection & termination by studying arrhythmia initiation and termination in rat models of atrial & ventricular arrhythmias using optical probes and light-gated ion channels. These insights will guide computer-based screening of proteins to identify those properties allowing effective arrhythmia detection & termination. These data will be used for rational engineering of the proteins with the desired properties, followed by their forced expression in cardiac cells and slices to assess anti-arrhythmic potential & safety. Promising proteins will be expressed in whole hearts to study their anti-arrhythmic effects and mechanisms, after which the most effective ones will be studied in awake rats.
This unexplored concept of self-resetting an acutely disturbed physiological state by establishing a biological detector-effector system may yield unique insight into arrhythmia management. Hence, this could provide distinctively innovative therapeutic rationales in which a diseased organ begets its own remedy, e.g. a Biologically-Integrated Cardiac Defibrillator (Bio-ICD).
Max ERC Funding
1 485 028 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym BIOCERENG
Project Bioceramics: Multiscale Engineering of Advanced Ceramics at the Biology Interface
Researcher (PI) Kurosch Rezwan
Host Institution (HI) UNIVERSITAET BREMEN
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary In recent decades, Materials Sciences and Life Sciences, two highly dynamic and interdisciplinary research areas, have been influencing natural and engineering sciences significantly, creating new challenges and opportunities. A prime example of the increasing synergetic overlap of Materials and Life Sciences is provided by biomedical and bioengineering applications, which are of great academic, but also of steadily increasing societal and commercial, interest. Bridging the traditional borders of disciplinary thinking in these areas has become one of today’s most challenging tasks for scientists. One group of key materials of great importance to biomedical engineering and bioengineering comprises advanced oxide and non-oxide ceramics with specific functionalities towards biological environments, so-called Bioceramics. The interplay at the ceramics-protein-cells/bacteria interface is very complex and requires multiscale and interdisciplinary approaches. This expertise, which is under continuous development in my Bioceramics group, encompasses materials processing, shaping, surface functionalisation and cell/bacteria evaluation at the same time. The comprehensive research environment and expertise provide a unique opportunity to engineer materials/surfaces with immediate subsequent biological evaluation in order to achieve an extremely short development time. A central focus is the contribution of electrostatic and hydrophilic/hydrophobic interactions to overall biocompatibility and bioactivity. The proposed research project includes four closely interrelated subprojects, addressing the following topics: “Interaction of surface functionalised ceramic particles with proteins”, “Cytotoxicity of functionalised oxide particles”, “Fabrication and testing of functionalised porous Al2O3 as filters for water cleaning and bioengineering applications” and “Novel functional scaffold composites for bone tissue engineering”.
Max ERC Funding
1 536 120 €
Duration
Start date: 2009-01-01, End date: 2013-12-31
Project acronym BrainDrain
Project Translational implications of the discovery of brain-draining lymphatics
Researcher (PI) Kari ALITALO
Host Institution (HI) HELSINGIN YLIOPISTO
Call Details Advanced Grant (AdG), LS7, ERC-2016-ADG
Summary In 2010, 800 billion Euros was spent on brain diseases in Europe, and the cost is expected to increase due to the aging population. Here I propose to exploit our new discovery for research to alleviate this disease burden. In work selected by Nature Medicine among the top 10 “Notable Advances” and by Science as one of the 10 “Breakthroughs of the year” 2015, we discovered a meningeal lymphatic vascular system that serves brain homeostasis. We want to reassess current concepts about cerebrovascular dynamics, fluid drainage and cellular trafficking in physiological conditions, in Alzheimer’s disease mouse models and in human postmortem tissues. First, we will study the development and properties of meningeal lymphatics and how they are sustained during aging. We then want to analyse the clearance of macromolecules and protein aggregates in Alzheimer’s disease in mice that lack the newly discovered meningeal lymphatic drainage system. We will study whether growth factor-mediated expansion of lymphatic vessels alleviates the parenchymal accumulation of neurotoxic amyloid beta and the pathogenesis of Alzheimer’s disease and brain damage after traumatic brain injury. We will further analyse the role of lymphangiogenic growth factors and lymphatic vessels in brain solute clearance, immune cell trafficking and a mouse model of multiple sclerosis. The meningeal lymphatics could be involved in a number of neurodegenerative and neuroinflammatory diseases of considerable human and socioeconomic burden. Several of our previous concepts have already been translated to clinical development, and we aim to develop proof-of-principle therapeutic concepts in this project. I feel that we are now in a unique position to advance frontline European translational biomedical research in this rapidly emerging field, which has received great attention worldwide.
Max ERC Funding
2 420 429 €
Duration
Start date: 2017-08-01, End date: 2022-07-31
Project acronym BRCA-ERC
Project Understanding cancer development in BRCA 1/2 mutation carriers for improved Early detection and Risk Control
Researcher (PI) Martin WIDSCHWENDTER
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Advanced Grant (AdG), LS7, ERC-2016-ADG
Summary Recent evidence demonstrates that cancer is overtaking cardiovascular disease as the number one cause of mortality in Europe. This is largely due to the lack of preventative measures for common (e.g. breast) or highly fatal (e.g. ovarian) human cancers. Most cancers are multifactorial in origin. The core hypothesis of this research programme is that the extremely high risk of BRCA1/2 germline mutation carriers to develop breast and ovarian cancer is a net consequence of cell-autonomous (direct effect of BRCA mutation in cells at risk) and cell non-autonomous (produced in distant organs and affecting organs at risk) factors which both trigger epigenetic, cancer-initiating effects.
The project’s aims are centred on the principles of systems medicine and built on a large cohort of BRCA mutation carriers and controls who will be offered newly established cancer screening programmes. We will uncover how ‘cell non-autonomous’ factors work, provide detail on the epigenetic changes in at-risk tissues, investigate whether these changes are mechanistically linked to cancer, study whether we can neutralise this process, and measure success in the organs at risk and, ideally, in easy-to-access samples such as blood, buccal and cervical cells.
In my Department for Women’s Cancer we have assembled a powerful interdisciplinary team including computational biologists, functionalists, immunologists and clinician scientists linked to leading patient advocacy groups which is extremely well placed to lead this pioneering project to develop the fundamental understanding of cancer development in women with BRCA mutations. To reset the epigenome, re-establishing normal cell identity and consequently reducing cancer risk without the need for surgery and being able to monitor the efficacy using multicellular epigenetic outcome predictors will be a major scientific and medical breakthrough and possibly applicable to other chronic diseases.
Max ERC Funding
2 497 841 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym CAPER/BREAST CANCE
Project CAPER in Invasive Breast Cancer
Researcher (PI) Michael Lisanti
Host Institution (HI) THE UNIVERSITY OF MANCHESTER
Call Details Advanced Grant (AdG), LS7, ERC-2008-AdG
Summary Breast cancer is a major cause of death in the United States and the Western world. Advanced medical technologies and therapeutic strategies are necessary for the successful detection, diagnosis, and treatment of breast cancer. Here, we propose to use novel technologies (tissue microarrays (TMA) and automated quantitative bioimaging (AQUA)) to identify new therapeutic and prognostic markers for human breast cancer. More specifically, we will study the activation status of a new signaling pathway which we have implicated in breast cancer pathogenesis, using both mouse models and cells in culture. For this purpose, we will study the association of CAPER expression with pre-malignant lesions and progression from pre-malignancy to full-blown breast cancer. We expect that this new molecular marker will allow us to improve diagnostic accuracy for individual patients, enhancing both the prognostic predictions and the prediction of drug responsiveness for a given patient.
Max ERC Funding
1 500 000 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym CASCAde
Project Confidentiality-preserving Security Assurance
Researcher (PI) Thomas GROSS
Host Institution (HI) UNIVERSITY OF NEWCASTLE UPON TYNE
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary "This proposal aims to create a new generation of security assurance. It investigates whether one can certify an inter-connected dynamically changing system in such a way that one can prove its security properties without disclosing sensitive information about the system's blueprint.
This has several compelling advantages. First, the security of large-scale dynamically changing systems will be significantly improved. Second, we can prove properties of topologies, hosts and users who participate in transactions in one go, while keeping sensitive information confidential. Third, we can prove the integrity of graph data structures to others, while maintaining their their confidentiality. This will benefit EU governments and citizens through the increased security of critical systems.
The proposal pursues the main research hypothesis that usable confidentiality-preserving security assurance will trigger a paradigm shift in security and dependability. It will pursue this objective by the creation of new cryptographic techniques to certify and prove properties of graph data structures. A preliminary investigation in 2015 showed that graph signature schemes are indeed feasible. The essence of this solution can be traced back to my earlier research on highly efficient attribute encodings for anonymous credential schemes in 2008.
However, the invention of graph signature schemes only clears one obstacle in a long journey to create a new generation of security assurance systems. There are still many complex obstacles, first and foremost assuring "soundness", in the sense that the integrity proofs a verifier accepts translate to the state of the system at that time. The work program involves six WPs: 1) to develop graph signatures and new cryptographic primitives; 2) to establish cross-system soundness; 3) to handle scale and change; 4) to establish human trust and usability; 5) to create new architectures; and 6) to test prototypes in practice.
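To illustrate the flavour of the goal, and not the graph signature scheme the project will actually develop, the following Python sketch commits to a network topology's edge list with a Merkle-style hash tree and then discloses a single edge together with a proof that it belongs to the committed graph, while the rest of the blueprint stays hidden. The helper names and the example topology are hypothetical.

```python
# Toy illustration only: a hash-based commitment to a graph's edge list with
# selective disclosure of single edges. Not the CASCAde graph signature scheme.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(edge) -> bytes:
    u, v = sorted(edge)
    return h(f"edge:{u}->{v}".encode())

def merkle_root(leaves):
    """Fold the leaf hashes pairwise up to a single root (the commitment)."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Authentication path for one leaf: sibling hashes from leaf to root."""
    path, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i + 1 if i % 2 == 0 else i - 1
        path.append((level[sibling], i % 2 == 0))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify(root, edge, path):
    node = leaf(edge)
    for sibling, node_is_left in path:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

# A small "topology": commit once, then disclose a single edge on demand.
edges = [("fw", "web1"), ("fw", "web2"), ("web1", "db"), ("web2", "db")]
leaves = [leaf(e) for e in edges]
commitment = merkle_root(leaves)
proof = prove(leaves, 2)                      # disclose only ("web1", "db")
assert verify(commitment, ("web1", "db"), proof)
```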
Max ERC Funding
1 485 643 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym CC-MEM
Project Coordination and Composability: The Keys to Efficient Memory System Design
Researcher (PI) David BLACK-SCHAFFER
Host Institution (HI) UPPSALA UNIVERSITET
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary Computer systems today are power limited. As a result, efficiency gains can be translated into performance. Over the past decade we have been so effective at making computation more efficient that we are now at the point where we spend as much energy moving data (from memory to cache to processor) as we do computing the results. And this trend is only becoming worse as we demand more bandwidth for more powerful processors. To improve performance we need to revisit the way we design memory systems from an energy-first perspective, both at the hardware level and by coordinating data movement between hardware and software.
CC-MEM will address memory system efficiency by redesigning low-level hardware and high-level hardware/software integration for energy efficiency. The key novelty is in developing a framework for creating efficient memory systems. This framework will enable researchers and designers to compose solutions to different memory system problems (through a shared exchange of metadata) and coordinate them towards high-level system efficiency goals (through a shared policy framework). Central to this framework is a bilateral exchange of metadata and policy between hardware and software components. This novel communication will open new challenges and opportunities for fine-grained optimizations, system-level efficiency metrics, and more effective divisions of responsibility between hardware and software components.
CC-MEM will change how researchers and designers approach memory system design from today’s ad hoc development of local solutions to one wherein disparate components can be integrated (composed) and driven (coordinated) by system-level metrics. As a result, we will be able to more intelligently manage data, leading to dramatically lower memory system energy and increased performance, and open new possibilities for hardware and software optimizations.
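As a rough sketch of the composition and coordination idea, with entirely hypothetical component and metadata names rather than CC-MEM's actual framework, the fragment below shows a cache publishing prefetch-accuracy metadata to a shared exchange and a prefetcher adapting to that metadata under a simple system-level policy.

```python
# Minimal sketch: components compose through a shared metadata exchange and are
# coordinated by one system-level policy. All names are invented.

class MetadataExchange:
    """Key-value store through which hardware/software components share metadata."""
    def __init__(self): self._data = {}
    def publish(self, key, value): self._data[key] = value
    def read(self, key, default=None): return self._data.get(key, default)

class Cache:
    """Tracks how useful prefetches were and publishes that as metadata."""
    def __init__(self, exchange):
        self.x, self.useful, self.issued = exchange, 0, 0
    def observe_prefetch(self, was_used):
        self.issued += 1
        self.useful += int(was_used)
        self.x.publish("prefetch_accuracy", self.useful / self.issued)

class Prefetcher:
    """Consumes the cache's metadata instead of guessing locally."""
    def __init__(self, exchange): self.x, self.degree = exchange, 4
    def adapt(self, power_cap_exceeded):
        accuracy = self.x.read("prefetch_accuracy", 1.0)
        # Coordination policy: save energy when accuracy is low or power is capped.
        self.degree = 1 if (accuracy < 0.5 or power_cap_exceeded) else 4

exchange = MetadataExchange()
cache, prefetcher = Cache(exchange), Prefetcher(exchange)
for used in [True, False, False, False]:      # mostly useless prefetches
    cache.observe_prefetch(used)
prefetcher.adapt(power_cap_exceeded=False)
print(prefetcher.degree)                      # -> 1: prefetching throttled
```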
Max ERC Funding
1 610 000 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym CELLNAIVETY
Project Deciphering the Molecular Foundations and Functional Competence of Alternative Human Naïve Pluripotent Stem Cells
Researcher (PI) Yaqub HANNA
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), LS7, ERC-2016-COG
Summary An important goal of stem cell therapy is to create “customized” cells that are genetically identical to the patient, which upon transplantation can restore damaged tissues. Such cells can be obtained by in vitro direct reprogramming of somatic cells into embryonic stem (ES)-like cells, termed induced pluripotent stem cells (iPSC). This approach also opens possibilities for modelling human diseases in vitro. However, major hurdles restrain the fulfilment of conventional human iPSC/ESC potential, as these cells reside in an advanced primed pluripotent state. Such hurdles include limited differentiation capacity and functional variability. Further, in vitro iPSC-based research platforms are simplistic, so iPSC-based “humanized” chimeric mouse models may be of great benefit.
The recent isolation of distinct, new “mouse-like” naive pluripotent states in humans that correspond to earlier embryonic developmental state(s) constitutes a paradigm shift and may alleviate the limitations of conventional primed iPSCs/ESCs. Thus, our proposal aims at dissecting the human naïve pluripotent state(s) and unveiling the pathways that facilitate their unique identity and flexible programming.
Specific goals: 1) Transcriptional and Epigenetic Design Principles of Human Naïve Pluripotency; 2) Signalling Principles Governing Human Naïve Pluripotency Maintenance and Differentiation; 3) Defining Functional Competence and Safety of Human Naïve Pluripotent Stem Cells in vitro; 4) Novel human naïve iPSC based cross-species chimeric mice for studying human differentiation and disease modelling in vivo. These aims will be pursued by utilizing engineered human iPSC/ESC models, CRISPR/Cas9 genome-wide screening, advanced microscopy and ex-vivo whole embryo culture methods. Our goals will synergistically lead to the design of strategies that will accelerate the safe medical application of human naive pluripotent stem cells and their use in disease-specific modelling and applied stem cell research.
Max ERC Funding
2 000 000 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym CellTrack
Project Cellular Position Tracking Using DNA Origami Barcodes
Researcher (PI) Björn HÖGBERG
Host Institution (HI) KAROLINSKA INSTITUTET
Call Details Consolidator Grant (CoG), LS7, ERC-2016-COG
Summary The research I propose here will provide an enabling technology, spatially resolved transcriptomics, to address important problems in cell and developmental biology, in particular: How do stem cells in the skin and gut proliferate without turning into cancers? How are differentiated cells related, in their transcriptome and spatial positions, to their progenitors?
To investigate these problems on a molecular level and open up paths to find completely new spatiotemporal interdependencies in complex biological systems, I propose to use our newly developed DNA-origami strategy (Benson et al., Nature 523, p. 441 (2015)), combined with a combinatorial cloning technique, to build a new method for deep mRNA sequencing of tissue with single-cell resolution. These new types of origami are stable in physiological salt conditions, which opens up their use in in vivo applications.
In DNA-origami we can control the exact spatial position of every nucleotide. By folding the scaffold to display sequences for hybridization of fluorophores conjugated to DNA, we can create optical nano-barcodes. Because these structures are made of DNA, the patterns of the optical barcodes will be readable both by imaging and by sequencing, thus enabling a mapping between cell locations in an organ and the mRNA expression of those cells.
We will use the method to perform spatially resolved transcriptomics in small organs: the mouse hair follicle, and small intestine crypt, and also perform the procedure for multiple samples collected at different time points. This will enable a high-dimensional data analysis that most likely will expose previously unknown dependencies that would provide completely new knowledge about how these biological systems work. By studying these systems, we will uncover much more information on how stem cells contribute to regeneration, the issue of de-differentiation that is a common theme in these organs and the effect this might have on the origin of cancer.
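A minimal sketch of the dual-readout barcode idea, with made-up docking-site sequences, positions and gene names and no claim to the project's actual chemistry: a barcode is the set of occupied docking positions on an origami, recoverable both from an imaged on/off fluorophore pattern and from the set of hybridized strands seen in sequencing, which is what allows an imaged cell position to be joined to that cell's transcriptome.

```python
# Toy sketch of the dual-readout barcode idea; all sequences, positions and
# gene names below are invented.
POSITIONS = 8                                             # docking sites per origami
SITE_SEQ = ["site%d_ACGTACGT" % i for i in range(POSITIONS)]  # placeholder strands

def encode(barcode_id):
    """Occupied docking positions = bits set in the barcode ID."""
    return {i for i in range(POSITIONS) if (barcode_id >> i) & 1}

def decode_from_image(bright_positions):
    """Optical readout: which positions fluoresce."""
    return sum(1 << i for i in bright_positions)

def decode_from_reads(reads):
    """Sequencing readout: which site strands were hybridized."""
    return sum(1 << i for i, seq in enumerate(SITE_SEQ) if seq in reads)

barcode = 0b10110101
occupied = encode(barcode)
assert decode_from_image(occupied) == barcode
assert decode_from_reads({SITE_SEQ[i] for i in occupied}) == barcode

# Joining the two readouts links a cell's imaged position to its transcriptome.
cell_location = {barcode: (12.5, 7.0)}        # position seen by imaging
cell_mrna = {barcode: ["Lgr5", "Axin2"]}      # transcripts seen by sequencing (illustrative)
print(cell_location[barcode], cell_mrna[barcode])
```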
Max ERC Funding
1 923 263 €
Duration
Start date: 2017-08-01, End date: 2022-07-31
Project acronym ChAMPioN
Project Game-changing Precision Medicine for Curing All Myeloproliferative Neoplasms
Researcher (PI) Tessa Holyoake
Host Institution (HI) UNIVERSITY OF GLASGOW
Call Details Advanced Grant (AdG), LS7, ERC-2016-ADG
Summary Despite decades of research, developing ways to overcome drug resistance in cancer is the most challenging bottleneck for curative therapies. This is because, in some forms of cancer, the cancer stem cells from which the diseases arise are constantly evolving, particularly under the selective pressures of drug therapies, in order to survive. The events leading to drug resistance can occur within one or more individual cancer stem cell(s) – and the features of each of these cells need to be studied in detail in order to develop drugs or drug combinations that can eradicate all of them. The BCR-ABL+ and BCR-ABL- myeloproliferative neoplasms (MPN) are a group of proliferative blood diseases that can be considered both exemplars of precision medicine and of the drug resistance bottleneck. While significant advances in the management of MPN have been made using life-long and expensive tyrosine kinase inhibitors (TKI), patients are rarely cured of their disease. This is because TKI fail to eradicate the leukaemia stem cells (LSC) from which MPN arise and which persist in patients on treatment, often leading to pervasive drug resistance, loss of response to therapy and progression to fatal forms of acute leukaemia. My goal is to change the way we study the LSC that persist in MPN patients as a means of delivering more effective precision medicine in MPN that is a “game-changer” leading to therapy-free remission (TFR) and cure. Here, I will apply an innovative strategy, ChAMPioN, to study the response of the MPN LSC to TKI in innovative pre-clinical laboratory models and directly in patients with MPN - up to the resolution of individual LSC. This work will reveal, for the first time, the molecular and clonal evolution of LSC during TKI therapies, thus enabling the development of more accurate predictions of TKI efficacy and resistance and rational approaches for curative drug therapies.
Max ERC Funding
3 005 818 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym CHOBOTIX
Project Chemical Processing by Swarm Robotics
Researcher (PI) Frantisek Stepanek
Host Institution (HI) VYSOKA SKOLA CHEMICKO-TECHNOLOGICKA V PRAZE
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary The aim of the project is to develop chemical processing systems based on the principle of swarm robotics. The inspiration for swarm robotics comes from the behaviour of collective organisms – such as bees or ants – that can perform complex tasks by the combined actions of a large number of relatively simple, identical agents. The main scientific challenge of the project will be the design and synthesis of chemical swarm robots (“chobots”), which we envisage as internally structured particulate entities in the 10-100 µm size range that can move in their environment, selectively exchange molecules with their surroundings in response to a local change in temperature or concentration, chemically process those molecules and either accumulate or release the product. Such chemically active autonomous entities can be viewed as very simple pre-biotic life forms, although without the ability to self-replicate or evolve. In the course of the project, the following topics will be explored in detail: (i) the synthesis of suitable shells for chemically active swarm robots, both soft (with a flexible membrane) and hard (porous solid shells); (ii) the mechanisms of molecular transport into and out of such shells and means of its active control; (iii) chemical reaction kinetics in spatially complex compartmental structures within the shells; (iv) collective behaviour of chemical swarm robots and their response to external stimuli. The project will be carried out by a multi-disciplinary team of enthusiastic young researchers, and the concepts and technologies developed in the course of the project, as well as the advancements in the fundamental understanding of the behaviour of “chemical robots” and their functional sub-systems, will open up new opportunities in diverse areas including next-generation distributed chemical processing, synthesis and delivery of personalised medicines, recovery of valuable chemicals from dilute resources, environmental clean-up, and others.
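A toy kinetic sketch, with invented rate constants and no claim to the project's own models, of how a swarm of identical chobots could collectively take up a dilute substrate, convert it inside their shells and release the product:

```python
# Toy kinetics only (all parameters invented): N identical "chobots" take up a
# substrate from their surroundings, convert it inside their shell, and
# release the product back into the environment.
N = 1000            # number of chobots
k_in = 0.002        # uptake rate per chobot   [1/h]
k_cat = 0.5         # internal conversion rate [1/h]
k_out = 0.3         # product release rate     [1/h]
dt, hours = 0.01, 24.0

S_env, S_in, P_in, P_env = 1.0, 0.0, 0.0, 0.0   # amounts (arbitrary units)
t = 0.0
while t < hours:
    uptake = k_in * N * S_env * dt      # substrate crossing the shells
    convert = k_cat * S_in * dt         # reaction inside the shells
    release = k_out * P_in * dt         # product leaving the shells
    S_env -= uptake
    S_in += uptake - convert
    P_in += convert - release
    P_env += release
    t += dt

print(f"after {hours:.0f} h: {100 * P_env:.1f}% of substrate recovered as product")
```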
Max ERC Funding
1 644 000 €
Duration
Start date: 2008-06-01, End date: 2013-05-31
Project acronym CIO
Project Common Interactive Objects
Researcher (PI) Susanne Bødker
Host Institution (HI) AARHUS UNIVERSITET
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary In CIO, common interactive objects are developed and explored to extend human control over the technological environment, both individually and together. CIO leads to a coherent framework of user interfaces to be applied in interaction design. Common interactive objects will provide a useful frame for furthering human-computer interaction (HCI) theory, the development of interaction design methods and the underlying technical platforms. Common interactive objects will empower users to better understand and develop the technologies they use.
When carried through, the project offers new ways for people to construct and configure human physical and virtual environments, together, over time and within communities.
The main objectives of CIO are to
1. develop the conception of common interactive objects in order to offer a new understanding of human-computer interaction, focusing on human control.
2. develop support for building user interfaces in a coherent and unified framework.
3. make common interactive objects that will empower users to better understand and develop the technologies they use.
4. carry out ground-breaking research regarding the technological basis of common interactive objects with focus on malleability, control and shareability over time.
CIO is methodologically rooted in HCI. CIO’s research methods combine empirical, analytical, theoretical, and design approaches, all with focus on the relationship between common interactive objects and their human users.
CIO presents the idea that common interactive objects may radically innovate our understanding of using and building user interfaces. The gain of CIO will be a coherent, new, high-impact way of understanding and building HCI across physical and virtual structures, bringing control back to the users. The risk lies in delivering this alternative in a manner that can confront the current strong commercial interests in the Internet-of-Things and the 'new' Artificial Intelligence.
Max ERC Funding
2 398 993 €
Duration
Start date: 2017-12-01, End date: 2022-11-30
Project acronym CompDB
Project The Computational Database for Real World Awareness
Researcher (PI) Thomas NEUMANN
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary Two major hardware trends have a significant impact on the architecture of database management systems (DBMSs): First, main memory sizes continue to grow significantly. Machines with 1TB of main memory and more are readily available at a relatively low price. Second, the number of cores in a system continues to grow, from currently 64 and more to hundreds in the near future.
This trend offers radically new opportunities for both business and science. It promises to allow for information-at-your-fingertips, i.e., large volumes of data can be analyzed and deeply explored online, in parallel to regular transaction processing. Currently, deep data exploration is performed outside of the database system, which necessitates huge data transfers. This impedes processing to the point that real-time interactive exploration is impossible. These new hardware capabilities now allow us to build a true computational database system that integrates deep exploration functionality at the source of the data. This will lead to a drastic shift in how users interact with data, as for the first time interactive data exploration becomes possible at a massive scale.
Unfortunately, traditional DBMSs are simply not capable to tackle these new challenges.
Traditional techniques like interpreted code execution for query processing become a severe bottleneck in the presence of such massive parallelism, causing poor utilization of the hardware. I pursue a radically different approach: Instead of adapting the traditional, disk-based approaches, I am integrating a new just-in-time compilation framework into the in-memory database that directly exploits the abundant, parallel hardware for large-scale data processing and exploration. By explicitly utilizing cores, I will be able to build a powerful computational database engine that scales the entire spectrum of data processing - from transactional to analytical to exploration workflows - far beyond traditional architectures.
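As a minimal sketch of the difference between interpreted and compiled query execution, assuming an illustrative schema and using Python's built-in compile() as a stand-in for a real just-in-time compiler, the fragment below evaluates the same filter once by walking a predicate description for every row and once through a generated, specialized function:

```python
# Sketch of the idea behind query compilation: instead of re-interpreting a
# predicate description per row, generate one specialized function per query.
# Schema, data and predicate are illustrative.
import operator
import time

rows = [{"price": i % 100, "qty": i % 7} for i in range(500_000)]
predicate = [("price", ">", 50), ("qty", "==", 3)]   # WHERE price > 50 AND qty = 3

def run_interpreted(rows, predicate):
    """Walks the predicate description for every single row."""
    ops = {">": operator.gt, "==": operator.eq}
    out = []
    for r in rows:
        if all(ops[op](r[col], val) for col, op, val in predicate):
            out.append(r)
    return out

def compile_query(predicate):
    """Generates and compiles one specialized filter function for this query."""
    cond = " and ".join(f"r[{col!r}] {op} {val!r}" for col, op, val in predicate)
    src = f"def q(rows):\n    return [r for r in rows if {cond}]"
    namespace = {}
    exec(compile(src, "<query>", "exec"), namespace)
    return namespace["q"]

for name, fn in [("interpreted", lambda: run_interpreted(rows, predicate)),
                 ("compiled", lambda: compile_query(predicate)(rows))]:
    t0 = time.perf_counter()
    result = fn()
    print(name, len(result), f"{time.perf_counter() - t0:.3f}s")
```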
Max ERC Funding
1 918 750 €
Duration
Start date: 2017-06-01, End date: 2022-05-31
Project acronym COMPECON
Project Complexity and Simplicity in Economic Mechanisms
Researcher (PI) Noam NISAN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary As more and more economic activity is moving to the Internet, familiar economic mechanisms are being deployed at unprecedented scales of size, speed, and complexity. In many cases this new complexity becomes the defining feature of the deployed economic mechanism and the quantitative difference becomes a key qualitative one. A well-studied example of such situations is how the humble single-item auction suddenly becomes a billion-times repeated online ad auction, or even becomes a combinatorial auction with exponentially many possible outcomes. Similar complexity explosions occur with various markets, with information dissemination, with pricing structures, and with many other economic mechanisms.
The aim of this proposal is to study the role and implications of such complexity and to start developing a coherent economic theory that can handle it. We aim to identify various measures of complexity that are crucial bottlenecks and study them. Examples of such complexities include the amount of access to data, the length of the description of a mechanism, its communication requirements, the cognitive complexity required from users, and, of course, the associated computational complexity. On one hand we will attempt finding ways of effectively dealing with complexity when it is needed, and on the other hand, attempt avoiding complexity, when possible, replacing it with "simple" alternatives without incurring too large of a loss.
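A toy contrast, with invented bids, between the humble single-item mechanism and its combinatorial cousin: the second-price auction takes a few lines, whereas even a tiny combinatorial auction below resorts to enumerating subsets of package bids, the kind of exponential winner determination that motivates the complexity measures studied here.

```python
# Invented bids; the brute-force winner determination is deliberately naive to
# make the exponential blow-up visible, not a practical algorithm.
from itertools import combinations

def second_price(bids):
    """Single-item Vickrey auction: highest bid wins and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

print(second_price({"a": 10, "b": 7, "c": 3}))           # ('a', 7)

def combinatorial_winners(package_bids):
    """Brute-force winner determination over disjoint package bids (exponential)."""
    best_value, best = 0, []
    bids = list(package_bids.items())                     # ((bidder, items), value)
    for k in range(len(bids) + 1):
        for subset in combinations(bids, k):
            packages = [items for (_, items), _ in subset]
            disjoint = (sum(len(p) for p in packages) == len(set().union(*packages))
                        if packages else True)
            value = sum(v for _, v in subset)
            if disjoint and value > best_value:
                best_value, best = value, [bidder for (bidder, _), _ in subset]
    return best, best_value

package_bids = {("a", frozenset({"x", "y"})): 12,
                ("b", frozenset({"y"})): 8,
                ("c", frozenset({"x"})): 6}
print(combinatorial_winners(package_bids))                # (['b', 'c'], 14)
```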
Max ERC Funding
2 026 706 €
Duration
Start date: 2017-05-01, End date: 2022-04-30
Project acronym COMPUSAPIEN
Project Computing Server Architecture with Joint Power and Cooling Integration at the Nanoscale
Researcher (PI) David ATIENZA ALONSO
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary The soaring demand for computing power in recent years has grown faster than semiconductor technology evolution can sustain, and has produced, as an undesirable collateral effect, a surge in power consumption and heat density in computing servers. Although computing servers are the foundations of the digital revolution, their current designs require 30-40% of the energy supplied to be dissipated in cooling. The remaining energy is used for computation, but their complex many-core designs produce very high operating temperatures. Thus, operating all the cores continuously at maximum performance levels results in system overheating and failures. This situation is limiting the benefits of technology scaling.
The COMPUSAPIEN proposal aims to completely revise the current computing server architecture. In particular, inspired by the mammalian brain, this proposal targets to design a disruptive three-dimensional (3D) computing server architecture that overcomes the prevailing worst-case power and cooling provisioning paradigm for servers. This new 3D server design champions a heterogeneous many-core architecture template with an integrated on-chip microfluidic fuel cell network for joint cooling delivery and power supply. Also, it will include a novel predictive controller based on holistic power-temperature models, which exploit the server software stack to achieve energy-scalable computing capabilities. Because of its integrated electronic-electrochemical architecture design, COMPUSAPIEN is clearly a high-risk high-reward proposal that will bring drastic energy savings with respect to current server design approaches, and will guarantee energy scalability in future server architectures. To realize this vision, COMPUSAPIEN will develop and integrate breakthrough innovations in heterogeneous computing architectures, cooling-power subsystem design, combined microfluidic power delivery and temperature management in computers.
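A minimal sketch, with invented constants and no claim to the project's actual server model or controller, of the predictive control idea: a lumped power-temperature model forecasts the next-step temperature, and frequency is throttled before a violation rather than after overheating.

```python
# Toy lumped RC thermal model plus a one-step-ahead predictive throttle.
# All constants are invented for illustration.
T_AMB, T_MAX = 40.0, 85.0        # coolant inlet / junction limit [deg C]
R_TH, C_TH = 0.4, 10.0           # thermal resistance [K/W], capacitance [J/K]
DT = 1.0                         # control period [s]
FREQS = [1.0, 2.0, 3.0]          # available clock frequencies [GHz]

def power(freq):                 # crude dynamic-power proxy: P ~ f^3
    return 20.0 + 5.0 * freq ** 3

def step(temp, p):               # explicit-Euler update of the RC thermal model
    return temp + DT / C_TH * ((T_AMB - temp) / R_TH + p)

def predictive_controller(temp):
    """Pick the highest frequency whose one-step-ahead temperature stays safe."""
    for f in sorted(FREQS, reverse=True):
        if step(temp, power(f)) <= T_MAX:
            return f
    return FREQS[0]

temp, trace = 45.0, []
for _ in range(60):
    f = predictive_controller(temp)
    temp = step(temp, power(f))
    trace.append((f, round(temp, 1)))
print(trace[:5], "...", trace[-1])   # frequency drops before T_MAX is ever crossed
```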
Max ERC Funding
1 999 281 €
Duration
Start date: 2017-06-01, End date: 2022-05-31
Project acronym ContraNPM1AML
Project Dissecting to hit the therapeutic targets in nucleophosmin (NPM1)-mutated acute myeloid leukemia
Researcher (PI) Maria Paola MARTELLI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI PERUGIA
Call Details Consolidator Grant (CoG), LS7, ERC-2016-COG
Summary Acute myeloid leukemia (AML) is a group of hematologic malignancies which, due to their molecular and clinical heterogeneity, have been traditionally difficult to classify and treat. Recently, next-generation, whole-genome sequencing has uncovered several recurrent somatic mutations that better define the landscape of AML genomics. Despite these advances in deciphering AML molecular subsets, there have been no concurrent improvements in AML therapy, which still relies on the ‘anthracycline+cytarabine’ scheme. To date, only about 40-50% of young adult patients are cured, whilst most of the elderly succumb to their disease. Therefore, new therapeutic approaches which would take advantage of the new discoveries are clearly needed. In the past years, we discovered and characterized nucleophosmin (NPM1) mutations as the most frequent genetic alteration (about 30%) in AML, and today NPM1-mutated AML is a new entity in the WHO classification of myeloid neoplasms. However, mechanisms of leukemogenesis and a specific therapy for this leukemia are missing. Here, I aim to unravel the complex network of molecular interactions that take place in this distinct genetic subtype, and find their vulnerabilities to identify new targets for therapy. To address this issue, I will avail of relevant pre-clinical models developed in our laboratories and propose two complementary strategies: 1) a screening-based approach, focused either on the target, by analyzing synthetic lethal interactions through CRISPR-based genome-wide interference, or on the drug, by high-throughput chemical library screenings; 2) a hypothesis-driven approach, based on our recently gained insights into the role of specific intracellular pathways/genes in NPM1-mutated AML and on pharmacological studies with ‘old’ drugs, which we have revisited in the specific AML genetic context. I expect our discoveries will lead to novel therapeutic approaches and make clinical trials available to patients as soon as possible.
Max ERC Funding
1 883 750 €
Duration
Start date: 2017-04-01, End date: 2022-03-31
Project acronym CSP
Project Cross-Layer Design of Securing Positioning
Researcher (PI) Srdan Capkun
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary With the development of new location-based services and the expected deployment of cyber-physical systems (e.g., autonomous cars and drones), the reliance on location and time information in critical applications will only increase. Today's positioning systems are vulnerable to location spoofing by which devices can cheat on their own positions or can manipulate the measured positions of other devices. This problem cannot be fixed by a simple upgrade - existing positioning systems rely on legacy distance measurement techniques and protocols that were designed without security considerations or with security as an after-thought. We therefore need a new approach to the design of positioning systems that takes security requirements into account from the very start, and also accounts for the way that positioning systems are built and used. This is a cross-layer endeavor. In this project we will address the following fundamental questions: (1) Physical Layer. How can we design the right distance measurement (i.e., distance bounding) techniques that provide resilience to physical-layer and logical-layer attacks but retain the performance (range, accuracy and speed of execution) of equivalent non-secure systems? We will extend the existing knowledge in terms of the attacker models as well as achievable limits of security and performance of distance measurement techniques under realistic attacker models. (2) Link Layer. What are the right Medium Access Control (MAC) protocols for secure positioning, and what are their performance and scalability limits? (3) Systems. How can distance bounding be integrated into mobile platforms, especially with trusted execution environments? How can this integration strengthen the security of distance bounding and support its use in a wide range of applications?
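A toy numeric sketch of the distance-bounding principle, not a protocol implementation: the verifier times a rapid challenge-response exchange and converts the round-trip time into an upper bound on the prover's distance, so a dishonest prover that delays its replies only appears farther away, never closer.

```python
# Numbers are illustrative; a real protocol also binds the replies
# cryptographically to the challenges, which is omitted here.
C = 299_792_458.0                           # speed of light [m/s]
T_PROC = 1e-9                               # fixed, protocol-defined turnaround [s]

def distance_upper_bound(rtt_s):
    """d <= c * (RTT - fixed processing time) / 2."""
    return C * max(rtt_s - T_PROC, 0.0) / 2

def rtt(distance_m, extra_delay_s=0.0):
    """Round-trip time seen by the verifier for a prover at distance_m."""
    return 2 * distance_m / C + T_PROC + extra_delay_s

print(round(distance_upper_bound(rtt(30.0)), 2))                        # ~30.0 m (honest)
print(round(distance_upper_bound(rtt(30.0, extra_delay_s=1e-7)), 2))    # ~45 m: delay only inflates the bound
```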
Max ERC Funding
1 952 274 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym CUTACOMBS
Project Cuts and decompositions: algorithms and combinatorial properties
Researcher (PI) Marcin PILIPCZUK
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary In this proposal we plan to extend mathematical foundations of algorithms for various variants of the minimum cut problem within theoretical computer science.
Recent advances in understanding the structure of small cuts and the tractability of cut problems have resulted in a mature algorithmic toolbox for undirected graphs under the paradigm of parameterized complexity. From this position, we now aim at a full understanding of the tractability of cut problems in the more challenging case of directed graphs, and see opportunities to apply the aforementioned successful structural approach to advance on major open problems in other paradigms in theoretical computer science.
The specific goals of the project are grouped in the following three themes.
Directed graphs. Chart the parameterized complexity of graph separation problems in directed graphs and provide a fixed-parameter tractability toolbox, equally deep as the one in undirected graphs. Provide tractability foundations for routing problems in directed graphs, such as the disjoint paths problem with symmetric demands.
Planar graphs. Resolve main open problems with respect to network design and graph separation problems in planar graphs under the following three paradigms: parameterized complexity, approximation schemes, and cut/flow/distance sparsifiers. Recently discovered connections uncover significant potential in synergy between these three algorithmic approaches.
Tree decompositions. Show improved tractability of graph isomorphism testing in sparse graph classes. Combine the algorithmic toolbox of parameterized complexity with the theory of minimal triangulations to advance our knowledge in structural graph theory, both pure (focused on the Erdos-Hajnal conjecture) and algorithmic (focused on the tractability of Maximum Independent Set and 3-Coloring).
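For reference, the textbook baseline that the parameterized algorithms above go beyond: an s-t minimum cut obtained from Edmonds-Karp max-flow on a small directed instance (a generic sketch, not an algorithm from the project).

```python
# Edmonds-Karp max-flow / min-cut on an adjacency-matrix capacity graph.
from collections import deque

def min_cut(n, edges, s, t):
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # Find the bottleneck and push flow along the path.
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    # Vertices still reachable from s in the residual graph form one cut side.
    side_s = {u for u in range(n) if parent[u] != -1}
    cut_edges = [(u, v, c) for u, v, c in edges if u in side_s and v not in side_s]
    return flow, cut_edges

edges = [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)]
print(min_cut(4, edges, s=0, t=3))   # max-flow value equals min-cut capacity (5)
```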
Max ERC Funding
1 228 250 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym D3
Project Interpreting Drawings for 3D Design
Researcher (PI) Adrien BOUSSEAU
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary Designers draw extensively to externalize their ideas and communicate with others. However, drawings are currently not directly interpretable by computers. To test their ideas against physical reality, designers have to create 3D models suitable for simulation and 3D printing. However, the visceral and approximate nature of drawing clashes with the tediousness and rigidity of 3D modeling. As a result, designers only model finalized concepts, and have no feedback on feasibility during creative exploration.
Our ambition is to bring the power of 3D engineering tools to the creative phase of design by automatically estimating 3D models from drawings. However, this problem is ill-posed: a point in the drawing can lie anywhere in depth. Existing solutions are limited to simple shapes, or require user input to “explain” to the computer how to interpret the drawing. Our originality is to exploit professional drawing techniques that designers developed to communicate shape most efficiently. Each technique provides geometric constraints that help viewers understand drawings, and that we shall leverage for 3D reconstruction.
Our first challenge is to formalize common drawing techniques and derive how they constrain 3D shape. Our second challenge is to identify which techniques are used in a drawing. We cast this problem as the joint optimization of discrete variables indicating which constraints apply, and continuous variables representing the 3D model that best satisfies these constraints. But evaluating all constraint configurations is impractical. To solve this inverse problem, we will first develop forward algorithms that synthesize drawings from 3D models. Our idea is to use this synthetic data to train machine learning algorithms that predict the likelihood that constraints apply in a given drawing.
In addition to tackling the long-standing problem of single-image 3D reconstruction, our research will significantly tighten the link between design and engineering for rapid prototyping.
Max ERC Funding
1 482 761 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym DCBIF
Project Flight dynamics and control of birds and insects
Researcher (PI) Graham Keith Taylor
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary Insects bristle with sensors, but how do they exploit this rich sensory information to achieve their extraordinary stability and manoeuvrability? Bird and insect wings deform in flight, and have passively deployable structures such as feathers and flaps, but how do they exploit these features when aircraft designers shy away from aeroelasticity? Birds fly without a vertical tailfin, but how do they maintain yaw stability when most aircraft require one to fly safely? Questions such as these drive my research on bird and insect flight dynamics. My research is unique in using the engineering tools of flight dynamics and control theory to analyse physiological and biomechanical data from real animals. One research track will use measurements of the forces and torques generated by insects flying tethered in a virtual-reality flight simulator to parameterise their equations of motion, in order to model the input-output relationships of their sensorimotor control systems. A second research track will measure the detailed wing kinematics and deformations of free-flying insects in order to analyse the effects of aeroelasticity on flight manoeuvres. A third research track will measure the wing and tail kinematics of free-flying birds using onboard wireless video cameras, and use system identification techniques to model how these affect the body dynamics measured using onboard instrumentation. Applying these novel experimental techniques will allow me to make and test quantitative predictions about flight stability and control. This highly interdisciplinary research bridges the fields of physiology and biomechanics, with significant feeds to and from engineering. My research will break new ground, developing novel experimental techniques and theoretical models in order to test and generate new hypotheses of adaptive function. Its broader impacts include the public interest in all things flying, and potential military and civilian applications in flapping micro-air vehicles.
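As a minimal illustration of the system identification step mentioned above (not the project's actual pipeline), the sketch below fits a one-state linear input-output model to simulated flight data by least squares; the coefficients, noise level and data are invented stand-ins for measured kinematics.

```python
# Minimal system-identification sketch: fit x[k+1] = a*x[k] + b*u[k] to logged
# data by least squares. The "flight data" are simulated with invented values.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.4
u = rng.normal(size=200)                      # e.g. a wing-beat control input
x = np.zeros(201)
for k in range(200):                          # simulate the "animal" response
    x[k + 1] = a_true * x[k] + b_true * u[k] + 0.01 * rng.normal()

# Stack regressors and solve the least-squares problem for (a, b).
A = np.column_stack([x[:-1], u])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, x[1:], rcond=None)
print(f"identified a={a_hat:.3f}, b={b_hat:.3f}")   # close to 0.9 and 0.4
```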
Max ERC Funding
1 954 565 €
Duration
Start date: 2008-06-01, End date: 2014-05-31
Project acronym DeepFace
Project Understanding Deep Face Recognition
Researcher (PI) Lior Wolf
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary Face recognition is a fascinating domain: no other domain seems to present as much value when analysing casual photos; it is one of the few domains in machine learning in which millions of classes are routinely learned; and the trade-off between subtle inter-identity variations and pronounced intra-identity variations forms a unique challenge.
The advent of deep learning has brought machines to what is considered a human level of performance. However, there are many research questions that are left open. At the topmost level, we ask two questions: what is unique about faces in comparison to other recognition tasks that also employ deep networks, and how can we make the next leap in performance of automatic face recognition?
We consider three domains of research. The first is the study of methods that promote effective transfer learning. This is crucial since all state-of-the-art face recognition methods rely on transfer learning. The second domain is the study of the tradeoffs that govern the optimal utilization of the training data and how the properties of the training data affect the optimal network design. The third domain is the post-transfer utilization of the learned deep networks, where, given the representations of a pair of face images, we seek to compare them in the most accurate way.
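For illustration only, the following sketch shows the kind of post-transfer comparison referred to in the third domain: deciding whether two face embeddings belong to the same identity by thresholding their cosine similarity. The embedding vectors and threshold are hypothetical; in practice the embeddings would come from a transfer-learned face network.

```python
# Toy comparison of two face representations via cosine similarity.
# Vectors and threshold are invented for illustration.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_a = np.array([0.12, -0.83, 0.54, 0.11])   # hypothetical embedding of image A
emb_b = np.array([0.10, -0.80, 0.58, 0.05])   # hypothetical embedding of image B

THRESHOLD = 0.7                               # would be tuned on a validation set
same_identity = cosine_similarity(emb_a, emb_b) > THRESHOLD
print("same identity?", same_identity)
```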
Throughout this proposal, we put an emphasis on theoretical reasoning. I aim to support the developed methods by a theoretical framework that would both justify their usage and provide concrete guidelines for using them. My goal of achieving a leap forward in performance through a level of theoretical analysis that is unparalleled in object recognition makes our research agenda truly high-risk/high-gain. I have been at the forefront of face recognition for the last 8 years, and my lab's recent achievements in deep learning suggest that we will be able to carry out this research. To further support its feasibility, we present very promising initial results.
Max ERC Funding
1 696 888 €
Duration
Start date: 2017-05-01, End date: 2022-04-30
Project acronym DisDyn
Project Distributed and Dynamic Graph Algorithms and Complexity
Researcher (PI) Danupon NA NONGKAI
Host Institution (HI) KUNGLIGA TEKNISKA HOEGSKOLAN
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary This project aims to (i) resolve challenging graph problems in distributed and dynamic settings, with a focus on connectivity problems (such as computing edge connectivity and distances), and (ii) along the way, develop a systematic approach to attacking problems in these settings, by thoroughly exploring relevant algorithmic and complexity-theoretic landscapes. Tasks include
- building a hierarchy of intermediate computational models so that designing algorithms and proving lower bounds can be done in several intermediate steps,
- explaining the limits of algorithms by proving conditional lower bounds based on old and new reasonable conjectures, and
- connecting techniques in the two settings to generate new insights that are unlikely to emerge from the isolated viewpoint of a single field.
The project will take advantage of, and contribute to, developments in many young fields in theoretical computer science, such as fine-grained complexity and sublinear algorithms. Resolving one of the connectivity problems will already be a groundbreaking result. However, given the approach, it is likely that one breakthrough will lead to many others.
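As a minimal illustration of the dynamic setting mentioned above (far simpler than the fully dynamic and distributed variants the project targets), the sketch below maintains connectivity under edge insertions with a union-find structure; the edge stream is invented.

```python
# Incremental dynamic connectivity with union-find (path halving, no ranks).
# The edge insertions are hypothetical; deletions, edge connectivity and
# distances require much more sophisticated machinery.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

    def connected(self, x, y):
        return self.find(x) == self.find(y)

uf = UnionFind(5)
for u, v in [(0, 1), (3, 4)]:         # incremental edge insertions
    uf.union(u, v)
print(uf.connected(0, 1))             # True
print(uf.connected(1, 3))             # False
uf.union(1, 3)
print(uf.connected(0, 4))             # True
```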
Max ERC Funding
1 500 000 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym DrugComb
Project Informatics approaches for the rational selection of personalized cancer drug combinations
Researcher (PI) Jing TANG
Host Institution (HI) HELSINGIN YLIOPISTO
Call Details Starting Grant (StG), LS7, ERC-2016-STG
Summary Making cancer treatment more personalized and effective is one of the grand challenges in our health care system. However, many drugs have entered clinical trials but have so far shown limited efficacy or induced rapid development of resistance. We critically need multi-targeted drug combinations, which shall selectively inhibit the cancer cells and block the emergence of drug resistance. This project will develop mathematical and computational tools to identify drug combinations that can be used to provide personalized and more effective therapeutic strategies that may prevent acquired resistance. Utilizing molecular profiling and pharmacological screening data from patient-derived leukaemia and ovarian cancer samples, I will develop model-based clustering methods for identification of patient subgroups that are differentially responsive to first-line chemotherapy. For patients resistant to chemotherapy, I will develop network modelling approaches to predict the most promising drug combinations by understanding the underlying drug-target interactions. The drug combination prediction will be made for each patient and will be validated using a preclinical drug testing platform on patient samples. I will explore the drug combination screen data to identify significant synergy at the therapeutically relevant doses. The drug combination hits will be mapped into signalling networks to infer their mechanisms. Drug combinations with selective efficacy in individual patient samples or in sample subgroups will be further translated into treatment options by clinical collaborators. This will lead to novel and personalized strategies to treat cancer patients.
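For illustration, one widely used way to score synergy of the kind described above is the Bliss independence model, sketched below; the inhibition values are hypothetical, and the project's actual scoring may differ.

```python
# Bliss independence sketch: compare the observed combination effect with the
# effect expected if the two drugs acted independently. Inhibition values are
# hypothetical fractions (0 = no effect, 1 = full inhibition).
def bliss_excess(inhibition_a, inhibition_b, inhibition_combo):
    expected = inhibition_a + inhibition_b - inhibition_a * inhibition_b
    return inhibition_combo - expected      # > 0 suggests synergy

# Single-agent responses at one dose pair, plus the measured combination.
print(round(bliss_excess(0.30, 0.40, 0.75), 3))   # 0.75 - 0.58 = 0.17 -> synergy
print(round(bliss_excess(0.30, 0.40, 0.55), 3))   # -0.03 -> roughly additive
```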
Max ERC Funding
1 500 000 €
Duration
Start date: 2017-06-01, End date: 2022-05-31
Project acronym ECC SCIENG
Project Error-correcting codes and their applications in Science and Engineering
Researcher (PI) Mohammad Amin Shokrollahi
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Advanced Grant (AdG), PE6, ERC-2008-AdG
Summary Error correcting codes are combinatorial objects which have traditionally been used to enhance the transmission of data on unreliable media. They have experienced phenomenal growth since their birth some fifty years ago. Today, everyday tasks such as listening to a CD, accessing the hard disk of an electronic device, talking on a wireless phone, or downloading files from the Internet are impossible without the use of error-correcting codes. Though traditional communication still occupies center stage in the realm of applied coding theory, emerging applications are changing the rules of the game, and calling for a new type of coding theory capable of addressing future needs. These are not limited to physical applications, however. In fact, coding theory is an integral part of solutions offered by researchers outside traditional physical communication to solve fundamental problems of interest, such as the complexity of computation, reliable transfer of bulk data, cryptographic protocols, self-correcting software, signal processing, or even computational biology. While research in the past fifty years has put traditional coding theory on firm theoretical grounds, emerging applications are in need of new tools and methods to design, analyze, and implement coding technologies capable of dealing with future needs. This is the main concern of the present proposal. To strike the right balance between length and impact, we have identified five areas of research that span the full spectrum of coding theory ranging from fundamental theoretical aspects to practical applications. We set out to develop new theoretical and practical models for the design and analysis of codes, and explore new application areas hitherto untouched. A unique feature of this proposal is our choice of the tools, ranging from classical areas of algebra, combinatorics, and probability theory, to ideas and methods from theoretical computer science.
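For readers unfamiliar with error-correcting codes, the following toy sketch (illustrative only) encodes four data bits with the classical Hamming(7,4) code and corrects a single flipped bit; the message and the flipped position are arbitrary.

```python
# Hamming(7,4): encode 4 data bits into 7 bits, correct any single bit flip.
# All arithmetic is modulo 2; the message below is arbitrary.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2
received = codeword.copy()
received[2] ^= 1                          # flip one bit on the "channel"

syndrome = H @ received % 2               # nonzero syndrome locates the error
for col in range(7):
    if np.array_equal(H[:, col], syndrome):
        received[col] ^= 1                # correct the flipped bit
        break
print("decoded data bits:", received[:4]) # recovers the original message
```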
Max ERC Funding
1 959 998 €
Duration
Start date: 2009-04-01, End date: 2013-03-31
Project acronym ECSTATIC
Project Electrostructural Tomography – Towards Multiparametric Imaging of Cardiac Electrical Disorders
Researcher (PI) Hubert Yann Marie COCHET
Host Institution (HI) UNIVERSITE DE BORDEAUX
Call Details Starting Grant (StG), LS7, ERC-2016-STG
Summary Cardiac electrical diseases are directly responsible for sudden cardiac death, heart failure and stroke. They result from a complex interplay between myocardial electrical activation and structural heterogeneity. Current diagnostic strategy based on separate electrocardiographic and imaging assessment is unable to grasp both these aspects. Improvements in personalised diagnostics are urgently needed as existing curative or preventive therapies (catheter ablation, multisite pacing, and implantable defibrillators) cannot be offered until patients are correctly recognised.
My aim is to achieve a major advance in the way cardiac electrical diseases are characterised and thus diagnosed and treated, through the development of a novel non-invasive modality (Electrostructural Tomography), combining magnetic resonance imaging (MRI) and non-invasive cardiac mapping (NIM) technologies.
The approach will consist of: (1) hybridising NIM and MRI technologies to enable the joint acquisition of magnetic resonance images of the heart and torso and of a large array of body surface potentials within a single environment; (2) personalising the inverse problem of electrocardiography based on MRI characteristics within the heart and torso, to enable accurate reconstruction of cardiac electrophysiological maps from body surface potentials within the 3D cardiac tissue; and (3) developing a novel disease characterisation framework based on registered non-invasive imaging and electrophysiological data, and proposing novel diagnostic and prognostic markers.
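As an illustration of the regularised inverse problem in step (2) (not the personalised formulation the project will develop), the sketch below recovers cardiac source potentials from simulated body-surface potentials by Tikhonov regularisation; the forward matrix and data are random stand-ins.

```python
# Tikhonov-regularised inverse sketch: recover sources x from b = A x + noise
# by x = argmin ||A x - b||^2 + lambda ||x||^2. A and the data are random
# stand-ins; in the project A would come from MRI-derived heart/torso geometry.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))             # torso electrodes x cardiac sources
x_true = rng.normal(size=20)
b = A @ x_true + 0.05 * rng.normal(size=40)

lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)
print("relative error:", round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```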
This project will dramatically impact the tailored management of cardiac electrical disorders, with applications for diagnosis, risk stratification/patient selection and guidance of pacing and catheter ablation therapies. It will bridge two medical fields (cardiac electrophysiology and imaging), thereby creating a new research area and a novel semiology with the potential to modify the existing classification of cardiac electrical diseases.
Max ERC Funding
1 475 000 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym ECSUB
Project Encoded Cellular Synthesis of Unnatural Biopolymers
Researcher (PI) Jason William Karl Chin
Host Institution (HI) MEDICAL RESEARCH COUNCIL
Call Details Starting Grant (StG), LS7, ERC-2007-StG
Summary We are building a parallel and independent (orthogonal) translational machinery for the encoded biosynthesis of unnatural polymers in living cells. The orthogonal translation system has many potential applications beyond those possible with the natural translation system: I propose to use it: 1) To expand the chemical scope of monomers that can be polymerized by the ribosome in living cells, allowing the incorporation of monomers with unnatural backbones into proteins; 2) To increase the efficiency of in vivo unnatural amino acid mutagenesis via amber suppression, so that no truncated protein is produced and multi-site incorporation of unnatural amino acids is possible; 3) To create probes of protein function for use in vivo; 4) To free numerous codons for simultaneous encoding of multiple distinct unnatural monomers, and to experimentally explore alternate genetic codes; 5) To explore the evolution of encoded unnatural polymers toward new cellular functions.
Max ERC Funding
1 782 918 €
Duration
Start date: 2009-01-01, End date: 2014-12-31
Project acronym EPI-Centrd
Project Epilepsy Controlled with Electronic Neurotransmitter Delivery
Researcher (PI) Adam WILLIAMSON
Host Institution (HI) UNIVERSITE D'AIX MARSEILLE
Call Details Starting Grant (StG), LS7, ERC-2016-STG
Summary Many efficient drugs have been designed to treat neurological disorders, but have failed in the clinic because they were toxic, could not cross the blood-brain barrier, and/or had deleterious side effects in healthy regions. I propose a conceptual breakthrough to solve these three issues, with minimally-invasive organic electronic ion pumps (OEIPs) to provide targeted treatment where and when it is needed. I will use epilepsy as the disease model because of its high rate of drug-resistance (30%) and will offer concrete opportunities for clinical transfer of such state-of-the-art technology.
The clinical problem: Resective surgery is frequently the last option available to a patient with drug-resistant epilepsy (> 1 million persons in the EU). However, surgery fails in 30% of the cases and can have deleterious consequences with severe postoperative neurological deficits (impaired motor function, speech and memory). Furthermore, some cases of epilepsy are simply untreatable surgically because resective surgery would leave unacceptable damage to core functions. Clearly, a new therapeutic approach is needed when neurosurgery is not possible or deemed too risky.
The OEIP solution: As I have demonstrated, OEIPs combine state-of-the-art organic electronics and pharmacology to control epileptiform activity in vitro by directly delivering inhibitory neurotransmitters on-demand. I additionally demonstrated that thin-film flexible organic electronics can be used to create minimally-invasive depth probes for implantation which significantly reduced tissue damage compared to standard rigid implants in vivo. I will integrate OEIPs on such probes creating devices which will have both the high-quality recordings provided by the organic electrodes for electrophysiological seizure detection and the molecular delivery capability of the OEIP for seizure intervention. The devices will be a closed-loop system to detect seizure onset and intervene in the affected brain region.
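As a conceptual illustration of the closed-loop logic described above (clinical detectors are far more elaborate), the sketch below computes a simple line-length feature over a sliding window of a recorded signal and triggers delivery when a threshold is crossed; the signal, window and threshold are invented.

```python
# Closed-loop sketch: a "line length" feature over sliding windows flags the
# seizure-like burst and triggers intervention. All values are hypothetical.
import numpy as np

def line_length(window):
    return float(np.sum(np.abs(np.diff(window))))

rng = np.random.default_rng(0)
signal = rng.normal(scale=0.05, size=1000)                    # baseline activity
signal[600:700] += np.sin(np.linspace(0, 60 * np.pi, 100))    # seizure-like burst

WINDOW, THRESHOLD = 100, 15.0
for start in range(0, len(signal) - WINDOW, WINDOW):
    if line_length(signal[start:start + WINDOW]) > THRESHOLD:
        print(f"samples {start}-{start + WINDOW}: threshold crossed, trigger delivery")
```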
Max ERC Funding
1 636 250 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym EPIC
Project Evolving Program Improvement Collaborators
Researcher (PI) Mark HARMAN
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary EPIC will automatically construct Evolutionary Program Improvement Collaborators (called Epi-Collaborators) that suggest code changes that improve software according to multiple functional and non-functional objectives. The Epi-Collaborator suggestions will include transplantation of code from a donor system to a host, grafting of entirely new features 'grown' (evolved) by the Epi-Collaborator, and identification and optimisation of tuneable 'deep' parameters (that were previously unexposed and therefore unexploited).
A key feature of the EPIC approach is that all of these suggestions will be underpinned by automatically-constructed quantitative evidence that justifies, explains and documents improvements. EPIC aims to introduce a new way of developing software, as a collaboration between human and machine, exploiting the complementary strengths of each; the human has domain and contextual insights, while the machine has the ability to intelligently search large search spaces. The EPIC approach directly tackles the emergent challenges of multiplicity: optimising for multiple competing and conflicting objectives and platforms with multiple software versions.
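As a schematic illustration of the search loop behind such suggestions (not the project's actual system), the sketch below runs a minimal (1+1)-style evolutionary improvement loop on a toy "program": mutations must keep the functional tests passing while a non-functional cost is reduced; everything here is hypothetical.

```python
# Minimal (1+1)-style evolutionary improvement loop. The "program" is a toy
# list of numbers standing in for an edit sequence; tests and cost are invented.
import random

def passes_tests(program):          # functional objective (must hold)
    return sum(program) == 10

def cost(program):                  # non-functional objective (minimise)
    return len([g for g in program if g != 0])

def mutate(program):
    p = list(program)
    i, j = random.randrange(len(p)), random.randrange(len(p))
    delta = random.choice([-1, 1])
    p[i] += delta
    p[j] -= delta                   # preserve the sum, i.e. keep tests passing
    return p

random.seed(0)
best = [1] * 10                     # a correct but "wasteful" starting program
for _ in range(2000):
    candidate = mutate(best)
    if passes_tests(candidate) and cost(candidate) <= cost(best):
        best = candidate            # accept improving or neutral variants
print("improved program:", best, "cost:", cost(best))
```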
Keywords:
Search Based Software Engineering (SBSE),
Evolutionary Computing,
Software Testing,
Genetic Algorithms,
Genetic Programming.
Max ERC Funding
2 159 035 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym EPISUSCEPTIBILITY
Project Epigenome and Cancer Susceptibility
Researcher (PI) Päivi Tuulikki Peltomäki
Host Institution (HI) HELSINGIN YLIOPISTO
Call Details Advanced Grant (AdG), LS7, ERC-2008-AdG
Summary Early detection is crucial for the outcome of most cancers. Prevention of cancer development is even more desirable. To facilitate these ultimate goals we aim to construct a comprehensive view of the stepwise process through which common human cancers, such as colorectal cancer, arise. In particular, we aim to identify novel mechanisms of cancer susceptibility by focusing on the epigenome, whose alterations may underlie several phenomena related to chronic adult-onset disease that are not explained by genetics alone. The stepwise process of carcinogenesis can be accelerated or halted for various reasons, including inherited susceptibility and diet. The human multi-organ cancer syndromes hereditary nonpolyposis colorectal cancer (HNPCC) and familial adenomatous polyposis (FAP) as well as their murine counterparts, the Mlh1+/- mouse and the ApcMin/+ mouse, will be used as shortcuts to study the interplay between the epigenome and genome in tumorigenesis and to identify biomarkers of cancer susceptibility, malignant transformation, and tumor progression. This will be achieved by molecular profiling of normal and tumor tissues, cell line studies, in vitro functional assays, and in silico approaches. Additionally, the role that the epigenome plays to mediate the effects of the Western type diet on colorectal tumorigenesis will be examined in the mouse. Unlike genetic changes, epigenetic alterations are potentially reversible, which makes them promising targets for preventive and therapeutic interventions.
Max ERC Funding
2 500 000 €
Duration
Start date: 2009-04-01, End date: 2014-09-30
Project acronym EVICARE
Project Extracellular Vesicle-Inspired CArdiac Repair
Researcher (PI) Joseph Petrus Gerardus SLUIJTER
Host Institution (HI) UNIVERSITAIR MEDISCH CENTRUM UTRECHT
Call Details Consolidator Grant (CoG), LS7, ERC-2016-COG
Summary More than 3.5 million people are newly diagnosed with heart failure every year in Europe, with a long-term prognosis of 50% mortality within 4 years. There is a major need for more innovative, regenerative therapies that have the potential to change the course of disease. My hypothesis is that we can recondition heart failure by stimulating cardiac repair with extracellular vesicles that are derived from progenitor cells. In my laboratory, extracellularly released vesicles, which contain a cocktail of stimulating factors, are amongst the most potent vectors for cardiac repair.
To achieve a sustainable and long-term therapeutic effect of these vesicles and enhance cardiac function by stimulating myocardial repair, we will 1) improve local cardiac delivery of progenitor cell-derived extracellular vesicles, 2) understand the mechanism of action of extracellular vesicles, and 3) stimulate extracellular vesicle release and/or production by progenitor cells.
These questions form the rationale for the current proposal, in which we will co-inject extracellular vesicles and slow-release biomaterials into the damaged myocardium. By subsequent genetic tracing, we will map the fate of injected vesicles in vivo and gain further mechanistic understanding in in vitro culture models of targeted and identified myocardial cell types. Moreover, we will further upscale vesicle production by progenitor cells via bioreactor culturing and medium-throughput screening for factors that stimulate vesicle release.
The use of stem cell-derived extracellular vesicles to stimulate cardiac repair will potentially allow for an off-the-shelf approach, including mechanistic understanding and future clinical use. Additionally, since these vesicles act as a natural carrier system outperforming current artificial drug delivery, we might understand and mimic their characteristics to enhance local (RNA-based) drug delivery systems for cardiovascular application.
Max ERC Funding
1 997 298 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym EVOLVE
Project Extracellular Vesicle-Internalizing Receptors (EVIRs) for Cancer ImmunoGeneTherapy
Researcher (PI) Michele DE PALMA
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Consolidator Grant (CoG), LS7, ERC-2016-COG
Summary We are witnessing transformative results in the clinical application of both cancer immunotherapies and gene transfer technologies. Tumor vaccines are a specific modality of cancer immunotherapy. Similar to vaccination against pathogens, tumor vaccines are designed to elicit a specific immune response against cancer. They are based on the administration of inactivated cancer cells or tumor antigens, or the inoculation of antigen-presenting cells (APCs) previously exposed to tumor antigens. In spite of significant development and testing, tumor vaccines have largely delivered unsatisfactory clinical results. Indeed, while some patients show dramatic and durable cancer regressions, many do not respond, highlighting both the potential and the shortcomings of current vaccination strategies. Hence, identifying and abating the barriers to effective cancer vaccines is key to broadening their therapeutic reach. The goal of EVOLVE (EVirs to Optimize and Leverage Vaccines for cancer Eradication) is to propel the development of effective APC-based tumor vaccines using an innovative strategy that overcomes several key hurdles associated with available treatments. EVOLVE puts forward a novel APC engineering platform whereby chimeric receptors are used to both enable the specific and efficient uptake of cancer-derived extracellular vesicles (EVs) into APCs, and to promote the cross-presentation of EV-associated tumor antigens for stimulating anti-tumor immunity. EVOLVE also envisions a combination of ancillary ‘outside of the box’ interventions, primarily based on further APC engineering combined with innovative pre-conditioning of the tumor microenvironment, to facilitate the deployment of effective APC-driven, T-cell-mediated anti-tumor immunity. Further to preclinical trials in mouse models of breast cancer and melanoma, our APC platform will be used to prospectively identify novel human melanoma antigens and reactive T cell clones for broader immunotherapy applications.
Max ERC Funding
1 958 919 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym EyeCode
Project Perceptual encoding of high fidelity light fields
Researcher (PI) Rafal Konrad MANTIUK
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary One of the grand challenges of computer graphics has been to generate images indistinguishable from photographs for a naïve observer. As this challenge is mostly completed and computer-generated imagery starts to replace photographs (product catalogues, special effects in cinema), the next grand challenge is to produce imagery that is indistinguishable from the real world.
Tremendous progress in capture, manipulation and display technologies opens the potential to achieve this new challenge (at the research stage) in the next 5-10 years. Electronic displays offer sufficient resolution, frame rate, dynamic range, colour gamut and, in some configurations, can produce binocular and focal depth cues. However, most of the work done in this area ignores or does not sufficiently address one of the key aspects of this problem - the performance and limitations of the human visual system.
The objective of this project is to characterise and model the performance and limitations of the human visual system when observing complex dynamic 3D scenes. The scene will span a high dynamic range (HDR) of luminance and provide binocular and focal depth cues. In technical terms, the project aims to create a visual model and difference metric for high dynamic range light fields (HDR-LFs). The visual metric will replace tedious subjective testing and provide the first automated method that can optimize encoding and processing of HDR-LF data.
Perceptually realistic video will impose enormous storage and processing requirements compared to traditional video. The bandwidth of such rich visual content will be the main bottleneck for new imaging and display technologies. Therefore, the final objective of this project is to use the new visual metric to derive an efficient and approximately perceptually uniform encoding of HDR-LFs. Such encoding will radically reduce storage and bandwidth requirements and will pave the way for future highly realistic image and video content.
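For context, an existing example of an approximately perceptually uniform encoding for plain HDR luminance (not light fields) is the SMPTE ST 2084 "PQ" curve, sketched below with the constants of the published standard; the project aims at analogous but far richer encodings for HDR-LFs.

```python
# SMPTE ST 2084 (PQ) encoding sketch: map absolute luminance in cd/m^2 to an
# approximately perceptually uniform code value in [0, 1].
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance_cd_m2):
    y = luminance_cd_m2 / 10000.0                 # normalise to the 10,000 cd/m^2 peak
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

for L in [0.1, 100, 1000, 10000]:
    print(f"{L:8.1f} cd/m^2 -> code {pq_encode(L):.3f}")
```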
Max ERC Funding
1 868 855 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym FAFC
Project Foundations and Applications of Functional Cryptography
Researcher (PI) Gil SEGEV
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary Modern cryptography has successfully followed an "all-or-nothing" design paradigm over the years. For example, the most fundamental task of data encryption requires that encrypted data be fully recoverable using the encryption key, but be completely useless without it. Nowadays, however, this paradigm is insufficient for a wide variety of evolving applications, and a more subtle approach is urgently needed. This has recently motivated the cryptography community to put forward a vision of "functional cryptography": Designing cryptographic primitives that allow fine-grained access to sensitive data.
This proposal aims at making substantial progress towards realizing the premise of functional cryptography. By tackling challenging key problems in both the foundations and the applications of functional cryptography, I plan to direct the majority of our effort towards addressing the following three fundamental objectives, which span a broad and interdisciplinary flavor of research directions: (1) Obtain a better understanding of functional cryptography's building blocks, (2) develop functional cryptographic tools and schemes based on well-studied assumptions, and (3) increase the usability of functional cryptographic systems via algorithmic techniques.
Realizing the premise of functional cryptography is of utmost importance not only to the development of modern cryptography, but in fact to our entire technological development, where fine-grained access to sensitive data plays an instrumental role. Moreover, our objectives are tightly related to two of the most fundamental open problems in cryptography: Basing cryptography on widely-believed worst-case complexity assumptions, and basing public-key cryptography on private-key primitives. I strongly believe that meaningful progress towards achieving our objectives will shed new light on these key problems, and thus have a significant impact on our understanding of modern cryptography.
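To make the notion of fine-grained access concrete, the following purely conceptual sketch shows the interface of a functional encryption scheme, in which a key for a function f decrypts a ciphertext of x only to f(x); the mock implementation provides the functionality with no security whatsoever, and all names are illustrative rather than taken from the proposal.

```python
# Interface-only sketch of functional encryption. A real scheme would hide the
# payload cryptographically; this mock illustrates the functionality only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Ciphertext:
    payload: list                     # stand-in for an actual ciphertext

@dataclass
class FunctionalKey:
    f: Callable[[list], float]

def encrypt(x):                       # Enc(mpk, x)
    return Ciphertext(payload=list(x))

def keygen(f):                        # KeyGen(msk, f)
    return FunctionalKey(f=f)

def decrypt(key, ct):                 # Dec(sk_f, ct) yields f(x), never x itself
    return key.f(ct.payload)

ct = encrypt([3.0, 4.0, 5.0])                      # sensitive data
avg_key = keygen(lambda xs: sum(xs) / len(xs))     # a key only for the "average" function
print(decrypt(avg_key, ct))                        # 4.0: only the average is learned
```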
Max ERC Funding
1 307 188 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym FLOVIST
Project Flow visualization inspired aero-acoustics with time-resolved Tomographic Particle Image Velocimetry
Researcher (PI) Fulvio Scarano
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary The recent developments of the Tomographic Particle Image Velocimetry technique and of the non-intrusive pressure field characterization method, by the applicant at TU Delft Aerospace Engineering, now open unforeseen perspectives in the area of unsteady flow diagnostics and experimental aero-acoustics. As a result of this work it is now possible not only to quantify complex flows in their three-dimensional structure, but also to extract quantities such as pressure. The current research proposal aims at the development of an innovative approach to experimental aero-acoustics and flow control, making use of the recently developed Tomographic-PIV technique. The objective is to fully describe and quantify the flow pattern and the related acoustic source term at its origin, which is of paramount importance to understand and control processes such as acoustic noise production and flow separation dominating aerodynamic drag. This is relevant for the improvement of aircraft design as far as drag reduction and noise emission are concerned, and should enable the development of "greener" aircraft for a sustainable growth of aviation in populated areas, in harmony with the technology innovation policy in Europe (7th Framework Programme) and TU Delft's sustainable development focus (CleanEra, Cost-Effective Low emission And Noise Efficient regional Aircraft) at Aerospace Engineering. To achieve this, the new-generation diagnostic approach offered by the Tomo-PIV technique must be further developed into a quadri-dimensional measurement tool (4D-PIV), enabling extraction of the relevant acoustic information from experimental observations by invoking the aeroacoustic analogies. A wide industrial and academic network (DLR, AIRBUS, DNW, NLR, LaVision, EWA, JMBC Burgerscentrum) developed in recent years is available to exploit the results of the proposed activity.
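As a conceptual illustration of the pressure-from-velocity idea underlying the proposal (greatly simplified relative to tomographic 4D data), the sketch below evaluates the material acceleration of a synthetic one-dimensional velocity field and integrates the momentum equation for the pressure gradient; all quantities are invented.

```python
# Pressure-from-velocity sketch: with time-resolved velocity fields, the
# material acceleration Du/Dt gives the pressure gradient via the momentum
# equation, grad(p) ~ -rho * Du/Dt (viscous terms neglected). A 1-D synthetic
# field stands in for the 3D tomographic data.
import numpy as np

rho, dx, dt = 1.2, 0.01, 0.001                     # air density [kg/m^3], grid, timestep
x = np.arange(0, 1, dx)
u_t0 = 10 + np.sin(2 * np.pi * x)                  # synthetic velocity at time t
u_t1 = 10 + np.sin(2 * np.pi * (x - 10 * dt))      # same wave convected at 10 m/s

dudt = (u_t1 - u_t0) / dt                          # local acceleration (forward difference)
dudx = np.gradient(u_t0, dx)                       # spatial derivative
material_accel = dudt + u_t0 * dudx                # Du/Dt = du/dt + u du/dx
dpdx = -rho * material_accel                       # pressure gradient
p = np.cumsum(dpdx) * dx                           # integrate (up to a constant)
print("pressure range [Pa]:", round(float(p.min()), 2), "to", round(float(p.max()), 2))
```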
Max ERC Funding
1 498 000 €
Duration
Start date: 2008-08-01, End date: 2013-07-31
Project acronym FRAGMENT2DRUG
Project Jigsaw puzzles at atomic resolution: Computational design of GPCR drugs from fragments
Researcher (PI) Jens CARLSSON
Host Institution (HI) UPPSALA UNIVERSITET
Call Details Starting Grant (StG), LS7, ERC-2016-STG
Summary Despite technological advances, industry struggles to develop new pharmaceuticals, and novel strategies for drug discovery are therefore urgently needed. G protein-coupled receptors (GPCRs) play important roles in numerous physiological processes and are important drug targets for neurological diseases. My research focuses on modelling GPCR-ligand interactions at the atomic level, with the goal of increasing knowledge of receptor function and developing new methods for drug discovery. Breakthroughs in GPCR structural biology and access to sensitive screening assays provide opportunities to utilize fragment-based lead discovery (FBLD), a powerful approach for drug design. The objective of the project is to create a computational platform for FBLD, with a vision to transform the early drug discovery process for GPCRs. As structural information for these targets is limited, predictive models of receptor-fragment complexes will be crucial for the successful use of FBLD. In this project, computational structure-based methods will be developed for the discovery of fragment ligands and their further optimization into potent leads. These techniques will be applied to address two difficult problems in drug discovery. The first is to design ligands for peptide-binding GPCRs, which have been challenging targets for existing methods. One of the promises of FBLD is to provide access to difficult targets, which will be explored by combining molecular docking and biophysical screening against peptide-GPCRs to identify novel lead candidates. A second challenge is that efficient treatment of neurological disorders often requires modulation of multiple targets, which will also be a focus of the project.
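As background on the fragment-based starting point mentioned above, fragment libraries are commonly pre-filtered with the heuristic "rule of three" (molecular weight ≤ 300, cLogP ≤ 3, at most 3 H-bond donors/acceptors and rotatable bonds). The sketch below applies such a filter with RDKit; the SMILES strings and exact thresholds are illustrative assumptions, not the project's actual library or selection criteria.

```python
# Minimal sketch: "rule of three" pre-filter often used to assemble fragment
# libraries for FBLD. Thresholds and molecules are illustrative only.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski


def passes_rule_of_three(smiles: str) -> bool:
    """Return True if the molecule satisfies a simple rule-of-three filter."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 300
            and Descriptors.MolLogP(mol) <= 3
            and Lipinski.NumHDonors(mol) <= 3
            and Lipinski.NumHAcceptors(mol) <= 3
            and Descriptors.NumRotatableBonds(mol) <= 3)


if __name__ == "__main__":
    # hypothetical candidate fragments (indole, aspirin, ibuprofen)
    for smi in ["c1ccc2[nH]ccc2c1",
                "CC(=O)Oc1ccccc1C(=O)O",
                "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]:
        print(smi, passes_rule_of_three(smi))
```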
Max ERC Funding
1 467 500 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym FTMEMS
Project Fiber-top micromachined devices: ideas on the tip of a fiber
Researcher (PI) Davide Iannuzzi
Host Institution (HI) STICHTING VU
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary Fiber-top sensors (D. Iannuzzi et al., patent application number PCT/NL2005/000816) are a new generation of miniaturized devices obtained by carving tiny movable structures directly on the cleaved edge of an optical fiber. The light coupled into the fiber allows the position of the micromechanical parts to be measured with sub-nanometer accuracy. The monolithic structure of the device, the absence of electronic contacts on the sensing head, and the simplicity of the working principle offer unprecedented opportunities for the development of scientific instruments for applications in and outside research laboratories. For example, a fiber-top scanning probe microscope (also in the form of a PenFM, where a fiber-top atomic force microscope would be incorporated in a pen-like stylus) could be routinely used in harsh environments and could be easily handled by untrained personnel or through remote control systems – a fascinating prospect for use in, among other settings, operating rooms and space missions. Similarly, fiber-top biochemical sensors could be exploited to implement portable equipment for in vivo and Point-of-Care medical testing. Fiber-top sensors could be used to measure parameters of medical relevance in interstitial fluid or in blood – an interesting opportunity for intensive care monitoring and early detection of life-threatening diseases. This scenario calls for a coordinated research program dedicated to this novel generation of devices. It is my intention to build a laboratory centered on fiber-top technology. My group will have the opportunity to pioneer this research area and to become the reference point in the field, at the forefront of an emerging subject that might represent a major breakthrough in the future development of micromachined sensors.
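To illustrate the sub-nanometer optical readout mentioned above, the fiber end and the cantilever form a low-finesse interferometric cavity whose reflected intensity varies as I(d) = I0·[1 + V·cos(4πd/λ)]; near the quadrature point, small deflections can be recovered from the intensity alone. The sketch below is a minimal model of this readout; the wavelength, visibility and operating point are assumed values, not the device's actual parameters.

```python
# Minimal sketch: interferometric readout of a fiber-top cantilever.
# Reflected intensity of the fiber/cantilever cavity:
#   I(d) = I0 * (1 + V * cos(4*pi*d / lam))
# Near quadrature the fringe can be inverted to recover small deflections.
# All numerical values are illustrative assumptions.
import numpy as np

lam = 1550e-9        # laser wavelength [m]
I0, V = 1.0, 0.8     # mean intensity and fringe visibility (assumed)
d0 = lam / 8         # quadrature operating point, where sensitivity peaks


def intensity(d):
    """Reflected intensity for a fiber-to-cantilever gap d."""
    return I0 * (1.0 + V * np.cos(4.0 * np.pi * d / lam))


def displacement_from_intensity(I):
    """Invert the fringe model within a single fringe (small-signal regime)."""
    cos_term = np.clip((I / I0 - 1.0) / V, -1.0, 1.0)
    return np.arccos(cos_term) * lam / (4.0 * np.pi)


if __name__ == "__main__":
    true_d = d0 + 0.3e-9                      # a 0.3 nm deflection
    d_est = displacement_from_intensity(intensity(true_d))
    print(f"recovered displacement error: {abs(d_est - true_d):.2e} m")
```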
Max ERC Funding
1 799 915 €
Duration
Start date: 2008-06-01, End date: 2013-05-31
Project acronym HEPASPHER
Project Mimicking liver disease and regeneration in vitro for drug development and liver transplantation
Researcher (PI) Magnus INGELMAN-SUNDBERG
Host Institution (HI) KAROLINSKA INSTITUTET
Call Details Advanced Grant (AdG), LS7, ERC-2016-ADG
Summary The liver is a vital organ for synthesis and detoxification. The most significant liver diseases are hepatitis, non-alcoholic fatty liver disease (NAFLD), non-alcoholic steatohepatitis (NASH), carcinoma and cirrhosis. An additional and important cause of liver injury is adverse drug reactions (ADRs). In particular, NAFLD is the most common liver disease, affecting between 20% and 44% of European adults and 43-70% of patients with type 2 diabetes, and is a prime cause of chronic and end-stage liver disease such as cirrhosis and primary hepatocellular carcinoma.
This proposal is based on recent findings in the laboratory: the development of a novel 3D spheroid system with chemically defined media that allows studies of chronic drug toxicity, relevant liver disease and liver function for 5 weeks in vitro; the identification of the role of miRNAs in hepatocyte dedifferentiation; and the observation that hepatocytes first de-differentiate during spheroid formation but later re-differentiate within the spheroids to an in vivo-relevant phenotype. This forms the basis for the main objectives: i) to study diseased liver in vitro, identifying mechanisms, biomarkers and novel drug candidates for the treatment of NAFLD and fibrosis, ii) to evaluate drug toxicity sensitivity and mechanisms in diseased liver systems, and iii) to further develop methods for hepatocyte proliferation and regeneration in vitro for transplantation purposes, including genetic editing of hepatocytes obtained from patients with genetically inherited liver diseases.
This work is carried out in close contact with the Hepatology unit at the Karolinska Hospital, partly using resources at the Science for Life Laboratory at Karolinska. It is anticipated that the project will provide novel mechanisms, biomarkers and new targets for the treatment of liver disease, as well as novel methods for clinically applicable liver regeneration without the use of stem cells or transformed cells.
Max ERC Funding
2 413 449 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym HEPCENT
Project Molecular Analysis of Hepatitis C Virus Neutralization and Entry For the Development of Novel Antiviral Immunopreventive Strategies
Researcher (PI) François-Loic Cosset
Host Institution (HI) INSTITUT NATIONAL DE LA SANTE ET DE LA RECHERCHE MEDICALE
Call Details Advanced Grant (AdG), LS7, ERC-2008-AdG
Summary Hepatitis C virus (HCV) infection is a leading cause of chronic liver disease worldwide. HCV-induced end-stage liver disease, such as liver cirrhosis and hepatocellular carcinoma, represents a major global health concern. Treatment options for chronic hepatitis C are limited and no vaccine against HCV infection is available. Vaccine development is hampered by several obstacles. High viral variability and escape from host immune responses make antigen selection a major challenge. Antigen selection requires thorough studies to identify conserved T cell and neutralization epitopes and to decipher neutralization mechanisms, aiming to discover the optimal viral target for immune responses that counteract HCV escape strategies. At the same time, it is important to develop antigen presentation systems that are efficient in patients with impaired antiviral immune responses, as often observed during chronic hepatitis C. While most vaccine development programs are based on improving HCV cellular immunity, it is essential to associate, in the same vaccine formulation, immunogens able to induce broad-spectrum neutralizing and cellular responses. Owing to recent progress in the field, here we propose a project aiming to overcome the current limitations in vaccine development by addressing the improvement of B cell responses targeting HCV infection. This will be achieved by a detailed investigation of: 1) mechanisms of antibody-mediated neutralization and escape, 2) the impact of lipoproteins that associate with the viral particle during assembly/release and counteract neutralization, and 3) cell entry steps that can potentially be targeted by antibodies, including those that are not induced naturally. Thus, through the combined expertise of the team in molecular virology, immunology, clinical hepatology and vectorology, we aim to rationalize the development of B cell immunogens and neutralizing antibodies for novel antiviral immunopreventive strategies targeting HCV infection.
Max ERC Funding
2 447 357 €
Duration
Start date: 2009-04-01, End date: 2014-12-31
Project acronym HIP-LAB
Project High-throughput integrated photonic lab-on-a-DVD platforms
Researcher (PI) Andreu Llobera
Host Institution (HI) AGENCIA ESTATAL CONSEJO SUPERIOR DEINVESTIGACIONES CIENTIFICAS
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary The main aim of the proposed research line is to develop high-throughput, highly sensitive photonic lab-on-a-DVD platforms for multiple parallel analyses with an extremely high degree of integration. Existing high-throughput platforms use the CD merely as a substrate, without any added functionality. Conversely, this research line proposes integrating the following elements into the DVD platform: (i) polymeric photonic components (high-sensitivity Mach-Zehnder interferometers, diffraction gratings and hollow prisms); (ii) polymeric microfluidics (hydrophobic valves and mixers); (iii) chemical modification of the surface with functional groups prone to interact with the specific analyte; and (iv) the necessary information in the DVD tracks to allow the proposed system to be used in modified DVD readers. Additionally, a new set-up will be mounted in which a second DVD read head is incorporated, so that simultaneous high-throughput photonic measurements can easily be performed. Compared to existing platforms, the presented research line requires the establishment of a dynamic multidisciplinary group comprising experts in photonics, microfluidics and (bio)chemistry. The results obtained will allow the definition of an advanced high-throughput photonic lab-on-a-DVD platform with a large number of application fields, ranging from molecular diagnostics to analytical chemistry and proteomics.
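For context on the integrated Mach-Zehnder interferometers listed above, such a sensor transduces a change in the effective refractive index of its sensing arm into an output-intensity modulation via the phase shift Δφ = 2πLΔn/λ, with I_out ∝ cos²(Δφ/2). The sketch below evaluates this response; the wavelength, interaction length and index changes are assumed values for illustration, not the platform's design parameters.

```python
# Minimal sketch: response of an integrated Mach-Zehnder interferometer (MZI)
# used as a refractive-index sensor. An analyte shifts the effective index of
# the sensing arm by dn, giving dphi = 2*pi*L*dn/lam and I = I_in*cos^2(dphi/2).
# All parameter values are illustrative assumptions.
import numpy as np

lam = 633e-9         # operating wavelength [m]
L = 5e-3             # interaction length of the sensing arm [m]
I_in = 1.0           # normalised input intensity


def mzi_output(dn: float) -> float:
    """Normalised MZI output intensity for an effective-index change dn."""
    dphi = 2.0 * np.pi * L * dn / lam
    return I_in * np.cos(dphi / 2.0) ** 2


if __name__ == "__main__":
    for dn in (0.0, 1e-5, 5e-5, 1e-4):
        print(f"dn = {dn:.0e}  ->  I_out = {mzi_output(dn):.3f}")
```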
Max ERC Funding
1 717 200 €
Duration
Start date: 2008-10-01, End date: 2014-09-30
Project acronym iAML-lncTARGET
Project Targeting the transcriptional landscape in infant AML
Researcher (PI) Jan-Henning Cornelius KLUSMANN
Host Institution (HI) MARTIN-LUTHER-UNIVERSITAET HALLE-WITTENBERG
Call Details Starting Grant (StG), LS7, ERC-2016-STG
Summary Infant acute myeloid leukemia (AML) has a dismal prognosis, with a high prevalence of unfavorable features and increased susceptibility to therapy-related toxicities, highlighting the need for innovative treatment approaches. Despite the discovery of an enormous number and diversity of transcriptional products arising from the previously presumed wastelands of the non-protein-coding genome, our knowledge of non-coding RNAs is far from being incorporated into standards of AML diagnosis and treatment. I hypothesize that the highly developmental stage- and cell-specific expression of long non-coding RNAs shapes a chromatin and transcriptional landscape in fetal hematopoietic stem cells that renders them permissive towards transformation. I predict this landscape to synergize with particular oncogenes that are otherwise not oncogenic in adult cells, by providing a fertile transcriptional background for establishing and maintaining oncogenic programs. Therefore, the non-coding transcriptome, inherited from the fetal cell of origin, may reflect a previously unrecognized Achilles heel of infant AML, which I will identify with my expertise to understand and edit the AML genome and transcriptome.
I will apply recent breakthroughs from various research areas to i) create a comprehensive transcriptomic atlas of infant AML and fetal stem cells, ii) define aberrant or fetal stage-specific non-coding RNAs that drive leukemia progression, and iii) resolve their features to probe the oncogenic interactome. After iv) establishing a biobank of patient-derived xenografts, I will v) evaluate preclinical RNA-centered therapeutic interventions to overcome current obstacles in the treatment of infant AML. Targeting the vulnerable fetal stage-specific background of infant AML, inherited from the cell of origin, may represent a paradigm shift in cancer treatment, focusing on the permissive basis the oncogene requires to induce and sustain cancer rather than on the oncogene itself.
Max ERC Funding
1 499 750 €
Duration
Start date: 2017-06-01, End date: 2022-05-31