Project acronym ADIPODIF
Project Adipocyte Differentiation and Metabolic Functions in Obesity and Type 2 Diabetes
Researcher (PI) Christian Wolfrum
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), LS6, ERC-2007-StG
Summary Obesity-associated disorders such as T2D, hypertension and CVD, commonly referred to as the “metabolic syndrome”, are prevalent diseases of industrialized societies. Deranged adipose tissue proliferation and differentiation contribute significantly to the development of these metabolic disorders. Comparatively little is known, however, about how these processes influence the development of metabolic disorders. Using a multidisciplinary approach, I plan to elucidate the molecular mechanisms underlying altered adipocyte differentiation and maturation in different models of obesity-associated metabolic disorders. Special emphasis will be given to the analysis of gene expression, posttranslational modifications and lipid molecular species composition. To achieve this goal, I am establishing several novel methods to isolate pure primary preadipocytes, including a new animal model that will allow me to monitor preadipocytes in vivo and track their cellular fate in the context of a complete organism. These systems will allow, for the first time, the study of preadipocyte biology in an in vivo setting. By monitoring preadipocyte differentiation in vivo, I will also be able to answer key questions regarding the development of preadipocytes and examine the signals that induce or inhibit their differentiation. Using transplantation techniques, I will elucidate the genetic and environmental contributions to the progression of obesity and its associated metabolic disorders. Furthermore, these studies will integrate a lipidomics approach to systematically analyze lipid molecular species composition in different models of metabolic disorders. My studies will provide new insights into the mechanisms and dynamics underlying adipocyte differentiation and maturation, and relate them to metabolic disorders. Detailed knowledge of these mechanisms will facilitate the development of novel therapeutic approaches for the treatment of obesity and associated metabolic disorders.
Max ERC Funding
1 607 105 €
Duration
Start date: 2008-07-01, End date: 2013-06-30
Project acronym ALGILE
Project Foundations of Algebraic and Dynamic Data Management Systems
Researcher (PI) Christoph Koch
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary "Contemporary database query languages are ultimately founded on logic and feature an additive operation – usually a form of (multi)set union or disjunction – that is asymmetric in that additions or updates do not always have an inverse. This asymmetry puts a large part of the machinery of abstract algebra for equation solving beyond the reach of databases. However, such equation solving would be a key functionality to which problems such as query equivalence testing and data integration could be reduced: in the presence of an asymmetric additive operation, they are undecidable. Moreover, query languages with a symmetric additive operation (i.e., one that has an inverse and is thus based on ring theory) would open up databases to a wide range of new scientific and mathematical applications.
The goal of the proposed project is to reinvent database management systems with a foundation in abstract algebra, and specifically in ring theory. The presence of an additive inverse makes it possible to cleanly define differences between queries. This gives rise to a database analog of differential calculus that leads to radically new incremental and adaptive query evaluation algorithms that substantially outperform state-of-the-art techniques. These algorithms enable a new class of systems, which I call Dynamic Data Management Systems. Such systems can maintain continuously fresh query views at extremely high update rates and have important applications in interactive large-scale data analysis. There is a natural connection between differences and updates, motivating a group-theoretic study of updates that will lead to better ways of creating out-of-core data processing algorithms for new storage devices. Basing queries on ring theory leads to a new class of systems, Algebraic Data Management Systems, which herald a convergence of database systems and computer algebra systems."
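The delta-query idea behind the proposal can be illustrated with a toy sketch (my own illustration, not project code): represent a relation as a map from tuples to integer multiplicities (a Z-relation), so that a deletion is simply a tuple with a negative multiplicity, and the change to a join view follows a product rule, d(R ⋈ S) = dR ⋈ S + R ⋈ dS + dR ⋈ dS.

```python
from collections import defaultdict

def add(r, s):
    """Ring addition: pointwise sum of multiplicities (Z-relations)."""
    out = defaultdict(int, r)
    for t, m in s.items():
        out[t] += m
    return {t: m for t, m in out.items() if m != 0}

def join(r, s):
    """Natural join on the first attribute; multiplicities multiply."""
    out = defaultdict(int)
    for (k1, a), m1 in r.items():
        for (k2, b), m2 in s.items():
            if k1 == k2:
                out[(k1, a, b)] += m1 * m2
    return dict(out)

def delta_join(r, s, dr, ds):
    """Delta rule: d(R join S) = dR join S + R join dS + dR join dS."""
    return add(add(join(dr, s), join(r, ds)), join(dr, ds))

R = {(1, 'a'): 1, (2, 'b'): 1}
S = {(1, 'x'): 1}
view = join(R, S)                  # {(1, 'a', 'x'): 1}
dR = {(2, 'b'): -1, (1, 'c'): 1}   # one deletion, one insertion
view = add(view, delta_join(R, S, dR, {}))  # incremental maintenance
```

After the update, `view` equals `join(add(R, dR), S)` without recomputing the join from scratch, which is the point of the delta rule.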
Max ERC Funding
1 480 548 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym ALMP_ECON
Project Effective evaluation of active labour market policies in social insurance programs - improving the interaction between econometric evaluation estimators and economic theory
Researcher (PI) Bas Van Der Klaauw
Host Institution (HI) STICHTING VU
Call Details Starting Grant (StG), SH1, ERC-2007-StG
Summary In most European countries, social insurance programs such as welfare, unemployment insurance and disability insurance are characterized by low reemployment rates. Governments therefore spend huge amounts of money on active labour market programs, which should help individuals find work. Recent surveys indicate that programs aimed at intensifying job search behaviour are much more effective than schooling programs for improving human capital. A second conclusion from these surveys is that, despite the scale of spending on these programs, evidence on their effectiveness is limited. This research proposal aims to develop an economic framework that will be used to evaluate the effectiveness of popular programs such as reemployment bonuses, fraud detection, workfare and job search monitoring. The main innovation is that I will combine economic theory with recently developed econometric techniques and detailed administrative data sets, which have not been explored before. While most of the literature focuses only on short-term outcomes, the available data allow me to also consider the long-term effectiveness of programs. The key advantage of an economic model is that I can compare the effectiveness of the different programs, and consider modifications of programs and combinations of programs. Furthermore, using an economic model I can construct profiling measures to improve the targeting of programs to subsamples of the population. This is particularly relevant if the effectiveness of programs differs between individuals or depends on the moment at which the program is offered. Therefore, the results from this research will not only be of scientific interest, but will also be of great value to policymakers.
Max ERC Funding
550 000 €
Duration
Start date: 2008-07-01, End date: 2013-06-30
Project acronym AUTOMATION
Project AUTOMATION AND INCOME DISTRIBUTION: A QUANTITATIVE ASSESSMENT
Researcher (PI) David Hémous
Host Institution (HI) UNIVERSITAT ZURICH
Call Details Starting Grant (StG), SH1, ERC-2018-STG
Summary Since the invention of the spinning frame, automation has been one of the drivers of economic growth. Yet workers, economists and the general public have been concerned that automation may destroy jobs or create inequality. This concern is particularly prevalent today, with the sustained rise in economic inequality and fast technological progress in IT, robotics and self-driving cars. The empirical literature has shown the impact of automation on income distribution. Yet the level of wages itself should also affect the incentives to undertake automation innovations. Understanding this feedback is key to assessing the long-term effect of policies. My project aims to provide the first quantitative account of the two-way relationship between automation and the income distribution.
It is articulated around three parts. First, I will use patent data to study empirically the causal effect of wages on automation innovations. To do so, I will build firm-level variation in the wages of the customers of innovating firms by exploiting variation in firms’ exposure to international markets. Second, I will study empirically the causal effect of automation innovations on wages. There, I will focus on local labour markets and use the patent data to build exogenous variation in local knowledge. Third, I will calibrate an endogenous growth model with firm dynamics and automation using Danish firm-level data. The model will replicate stylized facts on the labour share distribution across firms. It will be used to compute the contribution of automation to economic growth and to the decline of the labour share. Moreover, as a whole, the project will use two different methods (regression analysis and a calibrated model) and two different types of data to answer questions of crucial policy importance, such as: taking into account the response of automation, what are the long-term effects on wages of an increase in the minimum wage, a reduction in labour costs, or a robot tax?
Max ERC Funding
1 295 890 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym BayesianMarkets
Project Bayesian markets for unverifiable truths
Researcher (PI) Aurelien Baillon
Host Institution (HI) ERASMUS UNIVERSITEIT ROTTERDAM
Call Details Starting Grant (StG), SH1, ERC-2014-STG
Summary Subjective data play an increasing role in modern economics. For instance, new welfare measurements are based on people’s subjective assessments of their happiness or their life satisfaction. A problem of such measurements is that people have no incentives to tell the truth. To solve this problem and make those measurements incentive compatible, I will introduce a new market institution, called Bayesian markets.
Imagine we ask people whether they are happy with their life. On Bayesian markets, they will trade an asset whose value is the proportion of people answering Yes. Only those answering Yes will have the right to buy the asset and those answering No the right to sell it. Bayesian updating implies that “Yes” agents predict a higher value of the asset than “No” agents do and, consequently, “Yes” agents want to buy it while “No” agents want to sell it. I will show that truth-telling is then the optimal strategy.
Bayesian markets reward truth-telling the same way as prediction markets (betting markets) reward people for reporting their true subjective probabilities about observable events. Yet, unlike prediction markets, they do not require events to be objectively observable. Bayesian markets apply to any type of unverifiable truths, from one’s own happiness to beliefs about events that will never be observed.
The present research program will first establish the theoretical foundations of Bayesian markets. It will then develop the proper methodology to implement them. Finally, it will disseminate the use of Bayesian markets via applications.
The first application will demonstrate how degrees of expertise can be measured and will apply it to risks related to climate change and nuclear power plants. It will contribute to the political debate by shedding new light on what true experts think about these risks. The second application will provide the first incentivized measures of life satisfaction and happiness.
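The incentive at the core of the mechanism can be sketched with a minimal model (my own illustration, under the assumption that agents share a common Beta prior over the population Yes-rate and treat their own answer as one additional sample): conditioning on a truthful "Yes" raises the posterior expectation of the Yes-proportion, so "Yes" agents value the asset more than "No" agents do.

```python
def posterior_mean_yes_rate(answer_yes, alpha=1.0, beta=1.0):
    """Posterior mean of the population Yes-rate under a Beta(alpha, beta)
    prior, after the agent updates on their own answer as one sample."""
    a = alpha + (1 if answer_yes else 0)
    b = beta + (0 if answer_yes else 1)
    return a / (a + b)

# Under a uniform prior Beta(1, 1):
v_yes = posterior_mean_yes_rate(True)   # 2/3
v_no = posterior_mean_yes_rate(False)   # 1/3
assert v_yes > v_no  # "Yes" agents want to buy, "No" agents want to sell
```

The gap between the two valuations is what makes the market clear with truthful participation in this simple setting; the actual proofs in the project cover far more general beliefs.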
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-01-01, End date: 2020-12-31
Project acronym BIGCODE
Project Learning from Big Code: Probabilistic Models, Analysis and Synthesis
Researcher (PI) Martin Vechev
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of this proposal is to fundamentally change the way we build and reason about software. We aim to develop new kinds of statistical programming systems that provide probabilistically likely solutions to tasks that are difficult or impossible to solve with traditional approaches.
These statistical programming systems will be based on probabilistic models of massive codebases (also known as “Big Code”) built via a combination of advanced programming languages and powerful machine learning and natural language processing techniques. To solve a particular challenge, a statistical programming system will query a probabilistic model, compute the most likely predictions, and present those to the developer.
Based on probabilistic models of “Big Code”, we propose to investigate new statistical techniques in the context of three fundamental research directions: i) statistical program synthesis, where we develop techniques that automatically synthesize and predict new programs; ii) statistical prediction of program properties, where we develop new techniques that can predict important facts (e.g., types) about programs; and iii) statistical translation of programs, where we investigate new techniques for statistical translation of programs (e.g., from one programming language to another, or to a natural language).
We believe the research direction outlined in this interdisciplinary proposal opens a new and exciting area of computer science. This area will combine sophisticated statistical learning and advanced programming language techniques for building the next-generation statistical programming systems.
We expect the results of this proposal to have an immediate impact upon millions of developers worldwide, triggering a paradigm shift in the way tomorrow's software is built, as well as a long-lasting impact on scientific fields such as machine learning, natural language processing, programming languages and software engineering.
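As a toy illustration of the kind of query such a system answers (my own sketch, vastly simpler than the models the proposal envisions), even a bigram model over code token streams supports "most likely next token" predictions:

```python
from collections import Counter, defaultdict

def train_bigram(token_streams):
    """Count next-token frequencies conditioned on the previous token."""
    counts = defaultdict(Counter)
    for tokens in token_streams:
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, prev):
    """Most likely next token after `prev`, or None if `prev` is unseen."""
    return counts[prev].most_common(1)[0][0] if counts[prev] else None

# A tiny "codebase" of tokenized snippets (hypothetical training data):
corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "x", "in", "items", ":"],
    ["if", "x", "in", "seen", ":"],
]
model = train_bigram(corpus)
predict(model, "(")  # → "n" (the only observed follower of "(")
predict(model, "x")  # → "in" (followed "x" in two snippets)
```

Real "Big Code" models condition on far richer program context (syntax trees, types, data flow) rather than a single preceding token, but the query-the-model-and-rank-predictions workflow is the same.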
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym CAFES
Project Causal Analysis of Feedback Systems
Researcher (PI) Joris Marten Mooij
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Many questions in science, policy making and everyday life are of a causal nature: how would changing A influence B? Causal inference, a branch of statistics and machine learning, studies how cause-effect relationships can be discovered from data and how these can be used for making predictions in situations where a system has been perturbed by an external intervention. The ability to reliably make such causal predictions is of great value for practical applications in a variety of disciplines. Over the last two decades, remarkable progress has been made in the field. However, even though state-of-the-art causal inference algorithms work well on simulated data when all their assumptions are met, there is still a considerable gap between theory and practice. The goal of CAFES is to bridge that gap by developing theory and algorithms that will enable large-scale applications of causal inference in various challenging domains in science, industry and decision making.
The key challenge that will be addressed is how to deal with cyclic causal relationships ("feedback loops"). Feedback loops are very common in many domains (e.g., biology, economics and climatology), but have mostly been ignored so far in the field. Building on recently established connections between dynamical systems and causal models, CAFES will develop theory and algorithms for causal modeling, reasoning, discovery and prediction for cyclic causal systems. Extensions to stationary and non-stationary processes will be developed to advance the state of the art in causal analysis of time-series data. In order to make optimal use of available resources, computationally efficient and statistically robust algorithms for causal inference from observational and interventional data in the context of confounders and feedback will be developed. The work will be done with a strong focus on applications in molecular biology, one of the most promising areas for automated causal inference from data.
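A minimal sketch of what a cyclic causal model involves (my own illustration, not the project's formalism): two variables in a linear feedback loop, interpreted at equilibrium, where an intervention do(x = c) replaces x's structural equation and thereby cuts the feedback into x.

```python
def equilibrium(a, b, ex, ey):
    """Solve the cyclic SCM  x = a*y + ex,  y = b*x + ey  at its fixed
    point; a unique equilibrium requires a*b != 1."""
    assert a * b != 1
    x = (ex + a * ey) / (1 - a * b)
    y = b * x + ey
    return x, y

def do_x(c, b, ey):
    """Intervention do(x = c): x's equation is replaced by the constant c,
    so y responds to c through its own equation only."""
    return c, b * c + ey

x, y = equilibrium(a=0.5, b=0.4, ex=1.0, ey=0.0)
# x = 1.0 / (1 - 0.2) = 1.25, and y = 0.4 * 1.25 = 0.5
xi, yi = do_x(c=2.0, b=0.4, ey=0.0)
# under do(x = 2): y = 0.8, different from conditioning on x = 2
```

The distinction between observing and intervening is exactly what becomes subtle once cycles are present, since the equilibrium couples every variable to every other.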
Max ERC Funding
1 405 652 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym CHANGE-POINT TESTS
Project New Results on Structural Change Tests: Theory and Applications
Researcher (PI) Elena Andreou
Host Institution (HI) UNIVERSITY OF CYPRUS
Call Details Starting Grant (StG), SH1, ERC-2007-StG
Summary The research project has two broad objectives and provides novel results in the literature on structural change or change-point tests. The first objective is to provide two new methods for resolving the non-monotone power problem of a large family of structural break tests that have been widely used in econometrics and statistics, as well as to show that these methods make additional contributions and can be extended to: (i) tests for a change in persistence, (ii) partial sums tests of cointegration and (iii) tests for changes in dynamic volatility models. The significance of these methods is demonstrated via the consistency of the long-run variance estimator that scales the change-point statistics, the asymptotic properties of the tests, their finite sample performance and their relevance in empirical applications and policy analysis. The second objective is threefold. First, to show that ignoring structural changes in financial time series yields biased and inconsistent risk management estimates (Value at Risk, VaR, and Expected Shortfall, ES) and consequently leads to investment misallocations. Second, to propose methods for evaluating the stability of financial time series sequentially or on-line, which can be used as a quality control procedure for financial risk management, and to show that monitoring implied volatilities yields early-warning indicators of a changing risk structure. Moreover, we show that model averaging in the presence of structural breaks, as well as of other model uncertainties involved in risk management estimates, can provide robust estimates of VaR and ES. New results are derived on the optimal weights for model averaging in the context of dynamic volatility models and asymmetric loss functions. Third, we propose a novel way to construct prediction-based change-point statistics that reduce the detection delay of existing sequential tests and provide a probability assessment of the likelihood of a structural change.
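For orientation, a classical CUSUM statistic of the kind this literature builds on can be sketched as follows (an illustrative textbook form of the statistic, not one of the project's proposed tests):

```python
import math

def cusum_stat(x):
    """Standardized CUSUM: max_k |S_k - (k/n) * S_n| / (sigma * sqrt(n)),
    where S_k is the k-th partial sum. Large values suggest a shift in
    the mean at some change point."""
    n = len(x)
    total = sum(x)
    mean = total / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    s, stat = 0.0, 0.0
    for k, v in enumerate(x, start=1):
        s += v
        stat = max(stat, abs(s - k * total / n) / (sigma * math.sqrt(n)))
    return stat

calm = [0.0, 0.1, -0.1, 0.05, -0.05] * 4   # no break: statistic stays small
shifted = [0.0] * 10 + [2.0] * 10          # mean shift halfway through
assert cusum_stat(shifted) > cusum_stat(calm)
```

The non-monotone power problem the proposal targets arises because the variance estimator `sigma` in the denominator is itself inflated by a large break, which can drag the statistic back down exactly when the break is biggest.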
Max ERC Funding
517 200 €
Duration
Start date: 2008-09-01, End date: 2013-08-31
Project acronym CIRCUMVENT
Project Closing in on Runx3 and CXCL4 to open novel avenues for therapeutic intervention in systemic sclerosis
Researcher (PI) Timothy Radstake
Host Institution (HI) UNIVERSITAIR MEDISCH CENTRUM UTRECHT
Call Details Starting Grant (StG), LS6, ERC-2011-StG_20101109
Summary Systemic sclerosis (SSc) is an autoimmune disease that culminates in excessive extracellular matrix deposition (fibrosis) in skin and internal organs. SSc is a severe disease in which fibrotic events lead to organ failure, such as renal failure, deterioration of lung function and development of pulmonary arterial hypertension (PAH). Together, these disease hallmarks culminate in profound disability and premature death.
Over the past three years, several crucial observations by my group have changed the landscape of our thinking about the etiopathogenesis of this disease. First, plasmacytoid dendritic cells (pDCs) were found to be vastly more frequent (1000-fold) in the circulation of SSc patients compared with healthy individuals. In addition, we observed that pDCs from SSc patients are largely dedicated to synthesizing CXCL4, which was proven to be directly implicated in fibroblast biology and endothelial cell activation, two events recapitulating SSc. Finally, research aimed at deciphering the underlying cause of this increased pDC frequency led to the observation that Runx3, a transcription factor that controls the differentiation of DC subsets, was barely expressed in the pDCs of SSc patients. Together, these observations led me to pose the “SSc immune postulate”, in which the pathogenesis of SSc is explained as a multi-step process in which Runx3 and CXCL4 play a central role.
The project CIRCUMVENT is designed to provide proof of concept for the role of CXCL4 and RUNX3 in SSc. For this aim we will exploit a unique set of patient material (cell subsets, protein and DNA bank), various recently developed in vitro techniques (siRNA for pDCs, viral overexpression of CXCL4/RUNX3) and apply three recently optimised experimental models (CXCL4 subcutaneous pump model, DC-specific RUNX3 KO and the SCID/NOD/rag2 KO mice).
The project CIRCUMVENT aims to prove the direct role of Runx3 and CXCL4, which could provide the final step towards the development of novel therapeutic targets.
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-08-01, End date: 2017-07-31
Project acronym COhABIT
Project Consequences of helminth-bacterial interactions
Researcher (PI) Nicola Harris
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), LS6, ERC-2012-StG_20111109
Summary Throughout evolution, both intestinal helminths and commensal bacteria have inhabited our intestines. This “ménage à trois” is likely to have exerted a strong selective pressure on the development of our metabolic and immune systems. Such pressures remain in developing countries, whilst the eradication of helminths in industrialized countries has shifted this evolutionary balance, possibly underlying the increased development of chronic inflammatory diseases. We hypothesize that helminth-bacterial interactions are a key determinant of healthy homeostasis.
Preliminary findings from our laboratory indicate that helminth infection of mice alters the abundance and diversity of intestinal bacteria and alters the availability of immuno-modulatory metabolites; this altered environment correlates with a direct health advantage, protecting against inflammatory diseases such as asthma and rheumatoid arthritis. We intend to validate and extend these data in humans by performing bacterial phylogenetic and metabolic analysis of stool samples collected from a large cohort of children living in a helminth-endemic region of Ecuador. We further propose to test our hypothesis that helminth-bacterial interactions contribute to disease modulation using experimental models of infection and disease. We plan to develop and utilize mouse models to elucidate the mechanisms through which bacterial dysbiosis and helminth infection influence the development of chronic inflammatory diseases. These models will be utilized for germ-free and recolonization experiments, investigating the relative contribution of bacteria versus helminths to host immunity, co-metabolism and disease modulation.
Taking a trans-disciplinary approach, this research will break new ground in our understanding of the crosstalk and pressures between intestinal helminth infection and commensal bacterial communities, and the implications this has for human health.
Max ERC Funding
1 480 612 €
Duration
Start date: 2013-04-01, End date: 2018-03-31