Project acronym ALGILE
Project Foundations of Algebraic and Dynamic Data Management Systems
Researcher (PI) Christoph Koch
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary "Contemporary database query languages are ultimately founded on logic and feature an additive operation – usually a form of (multi)set union or disjunction – that is asymmetric in that additions or updates do not always have an inverse. This asymmetry puts a greater part of the machinery of abstract algebra for equation solving outside the reach of databases. However, such equation solving would be a key functionality that problems such as query equivalence testing and data integration could be reduced to: In the current scenario of the presence of an asymmetric additive operation they are undecidable. Moreover, query languages with a symmetric additive operation (i.e., which has an inverse and is thus based on ring theory) would open up databases for a large range of new scientific and mathematical applications.
The goal of the proposed project is to reinvent database management systems with a foundation in abstract algebra and specifically in ring theory. The presence of an additive inverse makes it possible to cleanly define differences between queries. This gives rise to a database analog of differential calculus that leads to radically new incremental and adaptive query evaluation algorithms that substantially outperform state-of-the-art techniques. These algorithms enable a new class of systems which I call Dynamic Data Management Systems. Such systems can maintain continuously fresh query views at extremely high update rates and have important applications in interactive large-scale data analysis. There is a natural connection between differences and updates, motivating a group-theoretic study of updates that will lead to better ways of creating out-of-core data processing algorithms for new storage devices. Basing queries on ring theory leads to a new class of systems, Algebraic Data Management Systems, which herald a convergence of database systems and computer algebra systems."
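A minimal sketch of the "differences between queries" idea described above, assuming relations are modelled as Z-weighted multisets (Python dictionaries mapping tuples to integer multiplicities); the delta rule for the join is the standard bilinear one, and the code is illustrative rather than the project's actual system:

    # Relations as Z-weighted multisets: addition has an inverse, so deletions
    # are just insertions with negative multiplicity.
    from collections import defaultdict

    def add(r, s):
        """Ring addition: pointwise sum of multiplicities, dropping zeros."""
        out = defaultdict(int)
        for rel in (r, s):
            for t, m in rel.items():
                out[t] += m
        return {t: m for t, m in out.items() if m != 0}

    def neg(r):
        """Additive inverse: turns an insertion into a deletion."""
        return {t: -m for t, m in r.items()}

    def join(r, s):
        """Natural join on the first attribute; multiplicities multiply."""
        out = defaultdict(int)
        for (a, b), m in r.items():
            for (a2, c), n in s.items():
                if a == a2:
                    out[(a, b, c)] += m * n
        return dict(out)

    def delta_join(r, dr, s, ds):
        """Delta rule: d(R |x| S) = dR |x| S + R |x| dS + dR |x| dS."""
        return add(add(join(dr, s), join(r, ds)), join(dr, ds))

    # Usage: maintain the view R |x| S incrementally under a mixed update.
    R = {(1, 'x'): 1, (2, 'y'): 1}
    S = {(1, 'u'): 1}
    view = join(R, S)
    dR = neg({(2, 'y'): 1})                      # delete (2, 'y')
    dS = {(2, 'v'): 1}                           # insert (2, 'v')
    view = add(view, delta_join(R, dR, S, dS))   # no recomputation from scratch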
Max ERC Funding
1 480 548 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym AlgoRNN
Project Recurrent Neural Networks and Related Machines That Learn Algorithms
Researcher (PI) Juergen Schmidhuber
Host Institution (HI) UNIVERSITA DELLA SVIZZERA ITALIANA
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Recurrent neural networks (RNNs) are general parallel-sequential computers. Some learn their programs or weights. Our supervised Long Short-Term Memory (LSTM) RNNs were the first to win pattern recognition contests, and recently enabled the best known results in speech and handwriting recognition, machine translation, etc. They are now available to billions of users through the world's most valuable public companies, including Google and Apple. Nevertheless, in many real-world tasks RNNs do not yet live up to their full potential. Although universal in theory, in practice they fail to learn important types of algorithms. This ERC project will go far beyond today's best RNNs through novel RNN-like systems that address some of the biggest open RNN problems and hottest RNN research topics: (1) How can RNNs learn to control (through internal spotlights of attention) separate large short-term memory structures such as sub-networks with fast weights, to improve performance on many natural short-term memory-intensive tasks which are currently hard for RNNs to learn, such as answering detailed questions on recently observed videos? (2) How can such RNN-like systems metalearn entire learning algorithms that outperform the original learning algorithms? (3) How can efficient transfer learning be achieved from one RNN-learned set of problem-solving programs to new RNN programs solving new tasks? In other words, how can one RNN-like system actively learn to exploit algorithmic information contained in the programs running on another? We will test our systems on existing benchmarks and create new, more challenging multi-task benchmarks. This will be supported by a rather cheap, GPU-based mini-brain for implementing large RNNs.
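For readers unfamiliar with the gated short-term memory that LSTM RNNs rely on, a minimal single-step sketch in NumPy; the shapes and names are illustrative assumptions, not taken from the project:

    import numpy as np

    def lstm_step(x, h, c, W, U, b):
        """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
        z = W @ x + U @ h + b
        i, f, o, g = np.split(z, 4)
        sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input/forget/output gates
        g = np.tanh(g)                                 # candidate cell update
        c_new = f * c + i * g                          # cell state: the short-term memory
        h_new = o * np.tanh(c_new)                     # hidden state fed to the next step
        return h_new, c_new

    # Usage: run a short random sequence through one cell.
    D, H = 8, 16
    rng = np.random.default_rng(0)
    W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in rng.normal(size=(5, D)):
        h, c = lstm_step(x, h, c, W, U, b)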
Max ERC Funding
2 500 000 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym AMDROMA
Project Algorithmic and Mechanism Design Research in Online MArkets
Researcher (PI) Stefano LEONARDI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Online markets currently form an important share of the global economy. The Internet hosts classical markets (real estate, stocks, e-commerce) as well as new markets with previously unknown features (web-based advertisement, viral marketing, digital goods, crowdsourcing, the sharing economy). Algorithms play a central role in many decision processes involved in online markets. For example, algorithms run electronic auctions, trade stocks, adjust prices dynamically, and harvest big data to provide economic information. Thus, it is of paramount importance to understand the algorithmic and mechanism design foundations of online markets.
The algorithmic research issues that we consider involve algorithmic mechanism design, online and approximation algorithms, modelling uncertainty in online market design, and large-scale data analysis, optimization and data mining. The aim of this research project is to combine these fields to address research questions that are central to today's Internet economy. We plan to apply these techniques to solve fundamental algorithmic problems motivated by Internet advertisement, the sharing economy, and online labour marketplaces. While my planned research is centered on foundational work with rigorous design and analysis of algorithms and mechanisms, it will also include, as an important component, empirical validation on large-scale real-life datasets.
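As a concrete, toy-sized example of the mechanism-design building blocks referred to above (illustrative only, not part of the proposal), a sealed-bid second-price auction, in which bidding one's true value is a dominant strategy:

    def second_price_auction(bids):
        """bids: dict bidder -> bid. Returns (winner, price paid)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else 0.0   # pay the second-highest bid
        return winner, price

    # The highest bidder wins but pays the runner-up's bid, so misreporting
    # one's value can never increase one's utility.
    print(second_price_auction({"a": 10.0, "b": 7.5, "c": 9.0}))   # ('a', 9.0)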
Max ERC Funding
1 780 150 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym AOC
Project Adversary-Oriented Computing
Researcher (PI) Rachid Guerraoui
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary "Recent technological evolutions, including the cloud, the multicore, the social and the mobiles ones, are turning computing ubiquitously distributed. Yet, building high-assurance distributed programs is notoriously challenging. One of the main reasons is that these systems usually seek to achieve several goals at the same time. In short, they need to be efficient, responding effectively in various average-case conditions, as well as reliable, behaving correctly in severe, worst-case conditions. As a consequence, they typically intermingle different strategies: each to cope with some specific condition, e.g., with or without node failures, message losses, time-outs, contention, cache misses,
over-sizing, malicious attacks, etc. The resulting programs end up hard to design, prove, verify, implement, test and debug. Not surprisingly, there are anecdotal evidences of the fragility of the most celebrated distributed systems.
The goal of this project is to contribute to building high-assurance distributed programs by introducing a new dimension for separating and isolating their concerns, as well as a new scheme for composing and reusing them in a modular manner. In short, the project will explore the inherent power and limitations of a novel paradigm, Adversary-Oriented Computing (AOC). Sub-programs, each implementing a specific strategy to cope with a given adversary, modelling a specific working condition, are designed, proved, verified, implemented, tested and debugged independently. They are then composed, possibly dynamically, as black-boxes within the same global program. The AOC project is ambitious and it seeks to fundamentally revisit the way distributed algorithms are designed and distributed systems are implemented. The gain expected in comparison with today's approaches is substantial, and I believe it will be proportional to the degree of difficulty of the distributed problem at hand."
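A toy sketch of the black-box composition idea, assuming two hypothetical sub-programs, one tuned for a benign adversary (no failures) and one for a harsh adversary (crash-prone replicas), selected behind a common interface; the names and selection policy are illustrative assumptions, not the project's design:

    class FastPath:
        """Optimised for the benign adversary: a single local write suffices."""
        def __init__(self, store):
            self.store = store
        def write(self, key, value):
            self.store[key] = value

    class RobustPath:
        """Conservative sub-program for the harsh adversary: write everywhere."""
        def __init__(self, replicas):
            self.replicas = replicas
        def write(self, key, value):
            for r in self.replicas:          # replicate to tolerate crashes
                r[key] = value

    class Composed:
        """Black-box composition: switch sub-program when the adversary changes."""
        def __init__(self, fast, robust):
            self.fast, self.robust = fast, robust
            self.failures_suspected = False
        def write(self, key, value):
            strategy = self.robust if self.failures_suspected else self.fast
            strategy.write(key, value)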
Max ERC Funding
2 147 012 €
Duration
Start date: 2014-06-01, End date: 2019-05-31
Project acronym AXIAL.EC
Project PRINCIPLES OF AXIAL POLARITY-DRIVEN VASCULAR PATTERNING
Researcher (PI) Claudio Franco
Host Institution (HI) INSTITUTO DE MEDICINA MOLECULAR JOAO LOBO ANTUNES
Call Details Starting Grant (StG), LS4, ERC-2015-STG
Summary The formation of a functional patterned vascular network is essential for development, tissue growth and organ physiology. Several human vascular disorders arise from the mis-patterning of blood vessels, such as arteriovenous malformations, aneurysms and diabetic retinopathy. Although blood flow is recognised as a stimulus for vascular patterning, very little is known about the molecular mechanisms that regulate endothelial cell behaviour in response to flow and promote vascular patterning.
Recently, we uncovered that endothelial cells migrate extensively in the immature vascular network, and that endothelial cells polarise against the blood flow direction. Here, we put forward the hypothesis that vascular patterning is dependent on the polarisation and migration of endothelial cells against the flow direction, in a continuous flux of cells going from low-shear stress to high-shear stress regions. We will establish new reporter mouse lines to observe and manipulate endothelial polarity in vivo in order to investigate how polarisation and coordination of endothelial cell movements are orchestrated to generate vascular patterning. We will manipulate cell polarity using mouse models to understand the importance of cell polarisation in vascular patterning. Also, using a unique zebrafish line allowing analysis of endothelial cell polarity, we will perform a screen to identify novel regulators of vascular patterning. Finally, we will explore the hypothesis that defective flow-dependent endothelial polarisation underlies arteriovenous malformations using two genetic models.
This integrative approach, based on high-resolution imaging and unique experimental models, will provide a unifying model defining the cellular and molecular principles involved in vascular patterning. Given the physiological relevance of vascular patterning in health and disease, this research plan will set the basis for the development of novel clinical therapies targeting vascular disorders.
Max ERC Funding
1 618 750 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym BDE
Project Beyond Distance Estimates: A New Theory of Heuristics for State-Space Search
Researcher (PI) Malte HELMERT
Host Institution (HI) UNIVERSITAT BASEL
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary "Many problems in computer science can be cast as state-space search, where the
objective is to find a path from an initial state to a goal state in a
directed graph called a ""state space"". State-space search is challenging due
to the state explosion problem a.k.a. ""curse of dimensionality"": interesting
state spaces are often astronomically large, defying brute-force exploration.
State-space search has been a core research problem in Artificial Intelligence
since its early days and is alive as ever. Every year, a substantial fraction
of research published at the ICAPS and SoCS conferences is concerned with
state-space search, and the topic is very active at general AI conferences
such as IJCAI and AAAI.
Algorithms in the A* family, dating back to 1968, are still the go-to approach
for state-space search. A* is a graph search algorithm whose only
""intelligence"" stems from a so-called ""heuristic function"", which estimates
the distance from a state to the nearest goal state. The efficiency of A*
depends on the accuracy of this estimate, and decades of research have pushed
the envelope in devising increasingly accurate estimates.
In this project, we question the ""A* + distance estimator"" paradigm and
explore three new directions that go beyond the classical approach:
1. We propose a new paradigm of declarative heuristics, where heuristic
information is not represented as distance estimates, but as properties of
solutions amenable to introspection and general reasoning.
2. We suggest moving the burden of creativity away from the human expert by
casting heuristic design as a meta-optimization problem that can be solved
automatically.
3. We propose abandoning the idea of exploring sequential paths in state
spaces, instead transforming state-space search into combinatorial
optimization problems with no explicit sequencing aspect. We argue that the
""curse of sequentiality"" is as bad as the curse of dimensionality and must
be addressed head-on."
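A compact sketch of the classical "A* + distance estimator" paradigm that the project sets out to question, with all domain knowledge confined to the heuristic h; the code is illustrative only, not part of the proposal:

    import heapq
    from itertools import count

    def a_star(start, is_goal, successors, h):
        """successors(s) yields (next_state, step_cost); h(s) estimates cost-to-go."""
        tie = count()                                    # breaks ties in the priority queue
        frontier = [(h(start), next(tie), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            _, _, g, state, path = heapq.heappop(frontier)
            if is_goal(state):
                return path, g
            for nxt, cost in successors(state):
                g2 = g + cost
                if g2 < best_g.get(nxt, float("inf")):   # cheaper path to nxt found
                    best_g[nxt] = g2
                    heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
        return None, float("inf")

    # Usage on a 4x4 grid: Manhattan distance is an admissible distance estimate.
    goal = (3, 3)
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    succ = lambda s: [((s[0] + dx, s[1] + dy), 1) for dx, dy in moves
                      if 0 <= s[0] + dx <= 3 and 0 <= s[1] + dy <= 3]
    h = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    path, cost = a_star((0, 0), lambda s: s == goal, succ, h)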
Max ERC Funding
1 997 510 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym BIGCODE
Project Learning from Big Code: Probabilistic Models, Analysis and Synthesis
Researcher (PI) Martin Vechev
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of this proposal is to fundamentally change the way we build and reason about software. We aim to develop new kinds of statistical programming systems that provide probabilistically likely solutions to tasks that are difficult or impossible to solve with traditional approaches.
These statistical programming systems will be based on probabilistic models of massive codebases (also known as "Big Code") built via a combination of advanced programming languages and powerful machine learning and natural language processing techniques. To solve a particular challenge, a statistical programming system will query a probabilistic model, compute the most likely predictions, and present those to the developer.
Based on probabilistic models of "Big Code", we propose to investigate new statistical techniques in the context of three fundamental research directions: i) statistical program synthesis where we develop techniques that automatically synthesize and predict new programs, ii) statistical prediction of program properties where we develop new techniques that can predict important facts (e.g., types) about programs, and iii) statistical translation of programs where we investigate new techniques for statistical translation of programs (e.g., from one programming language to another, or to a natural language).
We believe the research direction outlined in this interdisciplinary proposal opens a new and exciting area of computer science. This area will combine sophisticated statistical learning and advanced programming language techniques for building the next-generation statistical programming systems.
We expect the results of this proposal to have an immediate impact upon millions of developers worldwide, triggering a paradigm shift in the way tomorrow's software is built, as well as a long-lasting impact on scientific fields such as machine learning, natural language processing, programming languages and software engineering.
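A deliberately tiny illustration of the "query a probabilistic model of code, return the most likely prediction" loop described above: a bigram model over code tokens. The corpus and whitespace tokenisation are toy assumptions, far simpler than the models the proposal envisions:

    from collections import Counter, defaultdict

    corpus = [
        "for i in range ( n ) :",
        "for x in items :",
        "if x in seen :",
    ]
    bigrams = defaultdict(Counter)
    for line in corpus:
        toks = line.split()
        for a, b in zip(toks, toks[1:]):
            bigrams[a][b] += 1               # count how often token b follows token a

    def predict_next(token):
        """Most likely next token and its probability under the bigram model."""
        counts = bigrams[token]
        if not counts:
            return None, 0.0
        nxt, c = counts.most_common(1)[0]
        return nxt, c / sum(counts.values())

    print(predict_next("for"))   # e.g. ('i', 0.5): 'i' and 'x' each follow 'for' once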
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym BONEPHAGY
Project Defining the role of the FGF – autophagy axis in bone physiology
Researcher (PI) Carmine SETTEMBRE
Host Institution (HI) FONDAZIONE TELETHON
Call Details Starting Grant (StG), LS4, ERC-2016-STG
Summary Autophagy is a fundamental cellular catabolic process deputed to the degradation and recycling of a variety of intracellular materials. Autophagy plays a significant role in multiple human physio-pathological processes and is now emerging as a critical regulator of skeletal development and homeostasis. We have discovered that during postnatal development in mice, the growth factor FGF18 induces autophagy in the chondrocyte cells of the growth plate to regulate the secretion of type II collagen, a major component of cartilaginous extracellular matrix. The FGF signaling pathways play crucial roles during skeletal development and maintenance and are deregulated in many skeletal disorders. Hence our findings may offer the unique opportunity to uncover new molecular mechanisms through which FGF pathways regulate skeletal development and maintenance and to identify new targets for the treatment of FGF-related skeletal disorders. In this grant application we propose to study the role played by the different FGF ligands and receptors on autophagy regulation and to investigate the physiological relevance of these findings in the context of skeletal growth, homeostasis and maintenance. We will also investigate the intracellular machinery that links FGF signalling pathways to the regulation of autophagy. In addition, we generated preliminary data showing an impairment of autophagy in chondrocyte models of Achondroplasia (ACH) and Thanathoporic dysplasia, two skeletal disorders caused by mutations in FGFR3. We propose to study the role of autophagy in the pathogenesis of FGFR3-related dwarfisms and explore the pharmacological modulation of autophagy as new therapeutic approach for achondroplasia. This application, which combines cell biology, mouse genetics and pharmacological approaches, has the potential to shed light on new mechanisms involved in organismal development and homeostasis, which could be targeted to treat bone and cartilage diseases.
Max ERC Funding
1 586 430 €
Duration
Start date: 2017-01-01, End date: 2021-12-31
Project acronym BRITE
Project Elucidating the molecular mechanisms underlying brite adipocyte specification and activation
Researcher (PI) Ferdinand VON MEYENN
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), LS4, ERC-2018-STG
Summary Brown adipocytes can dissipate energy in a process called adaptive thermogenesis. Whilst the classical brown adipose tissue (BAT) depots disappear during early life in humans, cold exposure can promote the appearance of brown-like adipocytes within the white adipose tissue (WAT), termed brite (brown-in-white). Increased BAT activity results in increased energy expenditure and has been correlated with leanness in humans. Hence, recruitment of brite adipocytes may constitute a promising therapeutic strategy to treat obesity and its associated metabolic diseases. Despite the beneficial metabolic properties of brown and brite adipocytes, little is known about the molecular mechanisms underlying their specification and activation in vivo. This proposal focuses on understanding the complex biology of thermogenic adipocyte biology by studying the epigenetic and transcriptional aspects of WAT britening and BAT recruitment in vivo to identify pathways of therapeutic relevance and to better define the brite precursor cells. Specific aims are to 1) investigate epigenetic and transcriptional states and heterogeneity in human and mouse adipose tissue; 2) develop a novel time-resolved method to correlate preceding chromatin states and cell fate decisions during adipose tissue remodelling; 3) identify and validate key (drugable) epigenetic and transcriptional regulators involved in brite adipocyte specification. Experimentally, I will use adipose tissue samples from human donors and mouse models, to asses at the single-cell level cellular heterogeneity, transcriptional and epigenetic states, to identify subpopulations, and to define the adaptive responses to cold or β-adrenergic stimulation. Using computational methods and in vitro and in vivo validation experiments, I will define epigenetic and transcriptional networks that control WAT britening, and develop a model of the molecular events underlying adipocyte tissue plasticity.
Max ERC Funding
1 552 620 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym CARDIOEPIGEN
Project Epigenetics and microRNAs in Myocardial Function and Disease
Researcher (PI) Gianluigi Condorelli
Host Institution (HI) HUMANITAS MIRASOLE SPA
Call Details Advanced Grant (AdG), LS4, ERC-2011-ADG_20110310
Summary Heart failure (HF) is the ultimate outcome of many cardiovascular diseases. Re-expression of fetal genes in the adult heart contributes to development of HF. Two mechanisms involved in the control of gene expression are epigenetics and microRNAs (miRs). We propose a project on epigenetic and miR-mediated mechanisms leading to HF.
Epigenetics refers to heritable modification of DNA and histones that does not modify the genetic code. Depending on the type of modification and on the site affected, these chemical changes up- or down-regulate transcription of specific genes. Despite it being a major player in gene regulation, epigenetics has been only partly investigated in HF. miRs are regulatory RNAs that target mRNAs for inhibition. Dysregulation of the cardiac miR signature occurs in HF. miR expression may itself be under epigenetic control, constituting a miR-epigenetic regulatory network. To our knowledge, this possibility has not been studied yet.
Our specific hypothesis is that the profile of DNA/histone methylation and the cross-talk between epigenetic enzymes and miRs have fundamental roles in defining the characteristics of cells during cardiac development and that the dysregulation of these processes determines the deleterious nature of the stressed heart’s gene programme. We will test this first through a genome-wide study of DNA/histone methylation to generate maps of the main methylation modifications occurring in the genome of cardiac cells treated with a pro-hypertrophy regulator and of an HF model. We will then investigate the role of epigenetic enzymes deemed important in HF, through the generation and study of knockout mouse models. Finally, we will test the possible therapeutic potential of modulating epigenetic genes.
We hope to further understand the pathological mechanisms leading to HF and to generate data instrumental to the development of diagnostic and therapeutic strategies for this disease.
Max ERC Funding
2 500 000 €
Duration
Start date: 2012-10-01, End date: 2018-09-30