Project acronym ADIPODIF
Project Adipocyte Differentiation and Metabolic Functions in Obesity and Type 2 Diabetes
Researcher (PI) Christian Wolfrum
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), LS6, ERC-2007-StG
Summary Obesity-associated disorders such as T2D, hypertension and CVD, commonly referred to as the “metabolic syndrome”, are prevalent diseases of industrialized societies. Deranged adipose tissue proliferation and differentiation contribute significantly to the development of these metabolic disorders. Comparatively little, however, is known about how these processes influence the development of metabolic disorders. Using a multidisciplinary approach, I plan to elucidate the molecular mechanisms underlying altered adipocyte differentiation and maturation in different models of obesity-associated metabolic disorders. Special emphasis will be given to the analysis of gene expression, post-translational modifications and lipid molecular species composition. To achieve this goal, I am establishing several novel methods to isolate pure primary preadipocytes, including a new animal model that will allow me to monitor preadipocytes in vivo and track their cellular fate in the context of a complete organism. These systems will allow preadipocyte biology to be studied, for the first time, in an in vivo setting. By monitoring preadipocyte differentiation in vivo, I will also be able to answer key questions regarding the development of preadipocytes and examine signals that induce or inhibit their differentiation. Using transplantation techniques, I will elucidate the genetic and environmental contributions to the progression of obesity and its associated metabolic disorders. Furthermore, these studies will integrate a lipidomics approach to systematically analyze lipid molecular species composition in different models of metabolic disorders. My studies will provide new insights into the mechanisms and dynamics underlying adipocyte differentiation and maturation, and relate them to metabolic disorders. Detailed knowledge of these mechanisms will facilitate the development of novel therapeutic approaches for the treatment of obesity and associated metabolic disorders.
Max ERC Funding
1 607 105 €
Duration
Start date: 2008-07-01, End date: 2013-06-30
Project acronym ALGILE
Project Foundations of Algebraic and Dynamic Data Management Systems
Researcher (PI) Christoph Koch
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary "Contemporary database query languages are ultimately founded on logic and feature an additive operation – usually a form of (multi)set union or disjunction – that is asymmetric in that additions or updates do not always have an inverse. This asymmetry puts a greater part of the machinery of abstract algebra for equation solving outside the reach of databases. However, such equation solving would be a key functionality that problems such as query equivalence testing and data integration could be reduced to: In the current scenario of the presence of an asymmetric additive operation they are undecidable. Moreover, query languages with a symmetric additive operation (i.e., which has an inverse and is thus based on ring theory) would open up databases for a large range of new scientific and mathematical applications.
The goal of the proposed project is to reinvent database management systems with a foundation in abstract algebra and specifically in ring theory. The presence of an additive inverse allows to cleanly define differences between queries. This gives rise to a database analog of differential calculus that leads to radically new incremental and adaptive query evaluation algorithms that substantially outperform the state of the art techniques. These algorithms enable a new class of systems which I call Dynamic Data Management Systems. Such systems can maintain continuously fresh query views at extremely high update rates and have important applications in interactive Large-scale Data Analysis. There is a natural connection between differences and updates, motivating the group theoretic study of updates that will lead to better ways of creating out-of-core data processing algorithms for new storage devices. Basing queries on ring theory leads to a new class of systems, Algebraic Data Management Systems, which herald a convergence of database systems and computer algebra systems."
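To make the ring-theoretic idea concrete, here is a minimal sketch (our illustration, with invented names, not the project's design): relations are modeled as Z-multisets, so a deletion is just the addition of a negative multiplicity, and a join view can be maintained incrementally from the "derivative" of the query.

```python
from collections import defaultdict

def add(a, b):
    """Pointwise sum of two Z-multisets; negative counts encode deletions."""
    out = defaultdict(int, a)
    for t, m in b.items():
        out[t] += m
    return {t: m for t, m in out.items() if m != 0}

def join(a, b):
    """Join on the first attribute; multiplicities multiply (the ring product)."""
    out = defaultdict(int)
    for (k1, x), m in a.items():
        for (k2, y), n in b.items():
            if k1 == k2:
                out[(k1, x, y)] += m * n
    return dict(out)

def delta_join(r, s, dr, ds):
    """delta(R join S) = dR*S + R*dS + dR*dS, the 'derivative' of the join."""
    return add(add(join(dr, s), join(r, ds)), join(dr, ds))

R = {("a", 1): 1, ("b", 2): 1}
S = {("a", 10): 1}
view = join(R, S)                          # materialize the view once
dR = {("a", 1): -1, ("a", 3): 1}           # one deletion and one insertion
view = add(view, delta_join(R, S, dR, {})) # maintain it incrementally
print(view)                                # {('a', 3, 10): 1}
```

Because multiplicities form a ring, insertions and deletions are handled by the same delta rule; this is exactly the symmetry that set-union-based languages lack.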
Max ERC Funding
1 480 548 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym ALMP_ECON
Project Effective evaluation of active labour market policies in social insurance programs - improving the interaction between econometric evaluation estimators and economic theory
Researcher (PI) Bas Van Der Klaauw
Host Institution (HI) STICHTING VU
Call Details Starting Grant (StG), SH1, ERC-2007-StG
Summary In most European countries, social insurance programs such as welfare, unemployment insurance and disability insurance are characterized by low reemployment rates. Governments therefore spend huge amounts of money on active labour market programs intended to help individuals find work. Recent surveys indicate that programs aimed at intensifying job search behaviour are much more effective than schooling programs for improving human capital. A second conclusion from these surveys is that, despite the scale of spending on these programs, evidence on their effectiveness is limited. This research proposal aims at developing an economic framework for evaluating the effectiveness of popular programs such as reemployment bonuses, fraud detection, workfare and job search monitoring. The main innovation is that I will combine economic theory with recently developed econometric techniques and detailed administrative data sets that have not been explored before. While most of the literature focuses only on short-term outcomes, the available data allow me to also consider the long-term effectiveness of programs. The key advantage of an economic model is that I can compare the effectiveness of the different programs, and consider modifications and combinations of programs. Furthermore, using an economic model I can construct profiling measures to improve the targeting of programs to subsamples of the population. This is particularly relevant if the effectiveness of programs differs between individuals or depends on the moment in time the program is offered. The results from this research will therefore not only be of scientific interest, but also of great value to policymakers.
Max ERC Funding
550 000 €
Duration
Start date: 2008-07-01, End date: 2013-06-30
Project acronym AUTOMATION
Project AUTOMATION AND INCOME DISTRIBUTION: A QUANTITATIVE ASSESSMENT
Researcher (PI) David Hémous
Host Institution (HI) UNIVERSITAT ZURICH
Call Details Starting Grant (StG), SH1, ERC-2018-STG
Summary Since the invention of the spinning frame, automation has been one of the drivers of economic growth. Yet workers, economists and the general public have been concerned that automation may destroy jobs or create inequality. This concern is particularly prevalent today, with the sustained rise in economic inequality and fast technological progress in IT, robotics and self-driving cars. The empirical literature has shown the impact of automation on the income distribution. Yet the level of wages itself should also affect the incentives to undertake automation innovations. Understanding this feedback is key to assessing the long-term effects of policies. My project aims to provide the first quantitative account of the two-way relationship between automation and the income distribution.
The project is articulated around three parts. First, I will use patent data to study empirically the causal effect of wages on automation innovations. To do so, I will build firm-level variation in the wages of the customers of innovating firms by exploiting variation in firms’ exposure to international markets. Second, I will study empirically the causal effect of automation innovations on wages. There, I will focus on local labour markets and use the patent data to build exogenous variation in local knowledge. Third, I will calibrate an endogenous growth model with firm dynamics and automation using Danish firm-level data. The model will replicate stylized facts on the labour share distribution across firms. It will be used to compute the contribution of automation to economic growth and to the decline of the labour share. Moreover, as a whole, the project will use two different methods (regression analysis and a calibrated model) and two different types of data to answer questions of crucial policy importance, such as: taking into account the response of automation, what are the long-term effects on wages of an increase in the minimum wage, a reduction in labour costs, or a robot tax?
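As an illustration of the first empirical step, the following sketch uses entirely synthetic data and hypothetical variable names: a shift-share style instrument is built from predetermined market exposure shares interacted with market-level wage movements, and a manual two-stage least squares recovers the causal effect. This is one plausible reading of the strategy, not the project's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_markets = 500, 10

# Predetermined shares of each firm's sales across foreign markets, and
# market-level wage movements; their product is a shift-share instrument.
exposure = rng.dirichlet(np.ones(n_markets), size=n_firms)
wage_shift = rng.normal(0, 1, n_markets)
z = exposure @ wage_shift

u = rng.normal(0, 1, n_firms)                                # unobserved confounder
wages = 0.8 * z + 0.5 * u + rng.normal(0, 1, n_firms)        # endogenous regressor
patents = 1.5 * wages - 0.7 * u + rng.normal(0, 1, n_firms)  # true effect: 1.5

def two_sls(y, x, instrument):
    """Manual two-stage least squares with an intercept."""
    Z = np.column_stack([np.ones_like(instrument), instrument])
    X = np.column_stack([np.ones_like(x), x])
    x_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]  # first-stage fitted values
    return np.linalg.lstsq(x_hat, y, rcond=None)[0][1]

X = np.column_stack([np.ones(n_firms), wages])
ols = np.linalg.lstsq(X, patents, rcond=None)[0][1]
print(f"OLS:  {ols:.2f} (contaminated by the confounder)")
print(f"2SLS: {two_sls(patents, wages, z):.2f} (consistent for the true effect)")
```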
Max ERC Funding
1 295 890 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym BayesianMarkets
Project Bayesian markets for unverifiable truths
Researcher (PI) Aurelien Baillon
Host Institution (HI) ERASMUS UNIVERSITEIT ROTTERDAM
Call Details Starting Grant (StG), SH1, ERC-2014-STG
Summary Subjective data play an increasing role in modern economics. For instance, new welfare measurements are based on people’s subjective assessments of their happiness or their life satisfaction. A problem of such measurements is that people have no incentives to tell the truth. To solve this problem and make those measurements incentive compatible, I will introduce a new market institution, called Bayesian markets.
Imagine we ask people whether they are happy with their life. On Bayesian markets, they will trade an asset whose value is the proportion of people answering Yes. Only those answering Yes will have the right to buy the asset and those answering No the right to sell it. Bayesian updating implies that “Yes” agents predict a higher value of the asset than “No” agents do and, consequently, “Yes” agents want to buy it while “No” agents want to sell it. I will show that truth-telling is then the optimal strategy.
Bayesian markets reward truth-telling the same way as prediction markets (betting markets) reward people for reporting their true subjective probabilities about observable events. Yet, unlike prediction markets, they do not require events to be objectively observable. Bayesian markets apply to any type of unverifiable truths, from one’s own happiness to beliefs about events that will never be observed.
The present research program will first establish the theoretical foundations of Bayesian markets. It will then develop the proper methodology to implement them. Finally, it will disseminate the use of Bayesian markets via applications.
The first application will demonstrate how degrees of expertise can be measured and will apply it to risks related to climate change and nuclear power plants. It will contribute to the political debate by shedding new light on what true experts think about these risks. The second application will provide the first incentivized measures of life satisfaction and happiness.
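The incentive logic can be illustrated with a toy Beta-Bernoulli model (a simplification we introduce for exposition, not the paper's formal construction): each agent treats their own answer as one signal about the population fraction of Yes answers, and Bayesian updating separates the posterior expectations of Yes and No agents.

```python
# Shared prior Beta(a, b) over theta, the population fraction of 'Yes' types.
a, b = 2.0, 2.0

# Treating one's own answer as a single Bernoulli(theta) signal, the posterior
# mean of theta (and hence of the asset's value) differs by answer:
post_yes = (a + 1) / (a + b + 1)   # E[Yes-fraction | my answer is Yes]
post_no = a / (a + b + 1)          # E[Yes-fraction | my answer is No]

price = (post_yes + post_no) / 2   # any price strictly between the two works
print(f"E[asset | Yes] = {post_yes:.2f} > price = {price:.2f} > E[asset | No] = {post_no:.2f}")

# A truthful 'Yes' agent buys and expects a gain; a truthful 'No' agent sells
# and expects a gain; misreporting puts an agent on the losing side of the trade.
print(f"Yes-agent's expected gain from buying: {post_yes - price:+.2f}")
print(f"No-agent's expected gain from selling: {price - post_no:+.2f}")
```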
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-01-01, End date: 2020-12-31
Project acronym BIGCODE
Project Learning from Big Code: Probabilistic Models, Analysis and Synthesis
Researcher (PI) Martin Vechev
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of this proposal is to fundamentally change the way we build and reason about software. We aim to develop new kinds of statistical programming systems that provide probabilistically likely solutions to tasks that are difficult or impossible to solve with traditional approaches.
These statistical programming systems will be based on probabilistic models of massive codebases (also known as “Big Code”) built via a combination of advanced programming languages and powerful machine learning and natural language processing techniques. To solve a particular challenge, a statistical programming system will query a probabilistic model, compute the most likely predictions, and present those to the developer.
Based on probabilistic models of “Big Code”, we propose to investigate new statistical techniques in the context of three fundamental research directions: i) statistical program synthesis, where we develop techniques that automatically synthesize and predict new programs; ii) statistical prediction of program properties, where we develop new techniques that can predict important facts (e.g., types) about programs; and iii) statistical translation of programs, where we investigate new techniques for statistical translation of programs (e.g., from one programming language to another, or to a natural language).
We believe the research direction outlined in this interdisciplinary proposal opens a new and exciting area of computer science. This area will combine sophisticated statistical learning and advanced programming language techniques for building the next-generation statistical programming systems.
We expect the results of this proposal to have an immediate impact upon millions of developers worldwide, triggering a paradigm shift in the way tomorrow's software is built, as well as a long-lasting impact on scientific fields such as machine learning, natural language processing, programming languages and software engineering.
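A deliberately minimal example of the underlying idea (our illustration, far simpler than the models the proposal targets, with a placeholder corpus): fit a statistical model to a corpus of code tokens and query it for the most likely continuation.

```python
from collections import Counter, defaultdict

# A placeholder 'corpus' of tokenized code lines standing in for Big Code.
corpus = [
    "for i in range ( n ) :",
    "for j in range ( m ) :",
    "for i in range ( len ( xs ) ) :",
    "while i < n :",
]

bigrams = defaultdict(Counter)
for line in corpus:
    toks = line.split()
    for prev, nxt in zip(toks, toks[1:]):
        bigrams[prev][nxt] += 1

def predict(prev, k=3):
    """Return the k most likely next tokens after `prev`, with probabilities."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return [(tok, c / total) for tok, c in counts.most_common(k)]

print(predict("for"))    # 'i' is twice as likely as 'j' in this corpus
print(predict("range"))  # '(' follows 'range' with probability 1
```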
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym CAFES
Project Causal Analysis of Feedback Systems
Researcher (PI) Joris Marten Mooij
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Many questions in science, policy making and everyday life are of a causal nature: how would changing A influence B? Causal inference, a branch of statistics and machine learning, studies how cause-effect relationships can be discovered from data and how these can be used for making predictions in situations where a system has been perturbed by an external intervention. The ability to reliably make such causal predictions is of great value for practical applications in a variety of disciplines. Over the last two decades, remarkable progress has been made in the field. However, even though state-of-the-art causal inference algorithms work well on simulated data when all their assumptions are met, there is still a considerable gap between theory and practice. The goal of CAFES is to bridge that gap by developing theory and algorithms that will enable large-scale applications of causal inference in various challenging domains in science, industry and decision making.
The key challenge that will be addressed is how to deal with cyclic causal relationships ("feedback loops"). Feedback loops are very common in many domains (e.g., biology, economics and climatology), but have mostly been ignored so far in the field. Building on recently established connections between dynamical systems and causal models, CAFES will develop theory and algorithms for causal modeling, reasoning, discovery and prediction for cyclic causal systems. Extensions to stationary and non-stationary processes will be developed to advance the state of the art in causal analysis of time-series data. In order to make optimal use of available resources, computationally efficient and statistically robust algorithms for causal inference from observational and interventional data in the context of confounders and feedback will be developed. The work will be done with a strong focus on applications in molecular biology, one of the most promising areas for automated causal inference from data.
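The core difficulty with feedback can be sketched in a few lines (a toy linear model of our own, not CAFES machinery): a cyclic system must be solved to equilibrium, and an intervention do(x1 := c) breaks one of the loop's equations.

```python
def solve_scm(do_x1=None, n_iter=200):
    """Equilibrate the cyclic system  x1 = 0.5*x2 + e1,  x2 = 0.4*x1 + e2."""
    e1, e2 = 1.0, -0.5          # exogenous terms, fixed for clarity
    x1 = x2 = 0.0
    for _ in range(n_iter):     # converges: loop gain 0.5 * 0.4 < 1
        x1 = do_x1 if do_x1 is not None else 0.5 * x2 + e1
        x2 = 0.4 * x1 + e2
    return x1, x2

print("observational equilibrium:", solve_scm())       # ~ (0.9375, -0.125)
print("under do(x1 = 2):", solve_scm(do_x1=2.0))       # x2 ~ 0.4*2 - 0.5 = 0.3
```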
Max ERC Funding
1 405 652 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym CIRCUMVENT
Project Closing in on Runx3 and CXCL4 to open novel avenues for therapeutic intervention in systemic sclerosis
Researcher (PI) Timothy Radstake
Host Institution (HI) UNIVERSITAIR MEDISCH CENTRUM UTRECHT
Call Details Starting Grant (StG), LS6, ERC-2011-StG_20101109
Summary Systemic sclerosis (SSc) is an autoimmune disease that culminates in excessive extra-cellular matrix deposition (fibrosis) in skin and internal organs. SSc is a severe disease in which fibrotic events lead to organ failure such as renal failure, deterioration of lung function and development of pulmonary arterial hypertension (PAH). Together, these disease hallmarks culminate in profound disability and premature death.
Over the past three years, several crucial observations by my group have changed the landscape of our thinking about the etiopathogenesis of this disease. First, plasmacytoid dendritic cells (pDCs) were found to be extremely frequent in the circulation of SSc patients (1000-fold increased) compared with healthy individuals. In addition, we observed that pDCs from SSc patients are largely dedicated to synthesizing CXCL4, which was proven to be directly implicated in fibroblast biology and endothelial cell activation, two events recapitulating SSc. Finally, research aimed at deciphering the underlying cause of this increased pDC frequency led to the observation that Runx3, a transcription factor that controls the differentiation of DC subsets, is barely expressed in pDCs of SSc patients. Together, these observations led me to pose the “SSc immune postulate”, in which the pathogenesis of SSc is explained by a multi-step process in which Runx3 and CXCL4 play a central role.
The project CIRCUMVENT is designed to provide proof of concept for the role of CXCL4 and RUNX3 in SSc. To this aim, we will exploit a unique set of patient material (cell subsets, protein and DNA bank), various recently developed in vitro techniques (siRNA for pDCs, viral overexpression of CXCL4/RUNX3) and three recently optimised experimental models (a CXCL4 subcutaneous pump model, a DC-specific RUNX3 KO and SCID/NOD/rag2 KO mice).
The project CIRCUMVENT aims to prove the direct role of Runx3 and CXCL4, which could provide the final step towards the development of novel therapeutic targets.
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-08-01, End date: 2017-07-31
Project acronym COhABIT
Project Consequences of helminth-bacterial interactions
Researcher (PI) Nicola Harris
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), LS6, ERC-2012-StG_20111109
Summary "Throughout evolution both intestinal helminths and commensal bacteria have inhabited our intestines. This ""ménage à trois"" situation is likely to have exerted a strong selective pressure on the development of our metabolic and immune systems. Such pressures remain in developing countries, whilst the eradication of helminths in industrialized countries has shifted this evolutionary balance—possibly underlying the increased development of chronic inflammatory diseases. We hypothesize that helminth-bacterial interactions are a key determinant of healthy homeostasis.
Preliminary findings from our laboratory indicate that helminth infection of mice alters the abundance and diversity of intestinal bacteria and impacts on the availability of immuno-modulatory metabolites; this altered environment correlates with a direct health advantage, protecting against inflammatory diseases such as asthma and rheumatoid arthritis. We intend to validate and extend these data in humans by performing bacterial phlyogenetic and metabolic analysis of stool samples collected from a large cohort of children living in a helminth endemic region of Ecuador. We further propose to test our hypothesis that helminth-bacterial interactions contribute to disease modulation using experimental models of infection and disease. We plan to develop and utilize mouse models to elucidate the mechanisms through which bacterial dysbiosis and helminth infection influence the development of chronic inflammatory diseases. These models will be utilized for germ-free and recolonization experiments, investigating the relative contribution of bacteria versus helminthes to host immunity, co-metabolism and disease modulation.
Taking a trans-disciplinary approach, this research will break new ground in our understanding of the crosstalk and pressures between intestinal helminth infection and commensal bacterial communities, and the implications this has for human health."
Max ERC Funding
1 480 612 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym ComBact
Project How complement molecules kill bacteria
Researcher (PI) Suzan Rooijakkers
Host Institution (HI) UNIVERSITAIR MEDISCH CENTRUM UTRECHT
Call Details Starting Grant (StG), LS6, ERC-2014-STG
Summary This proposal aims to provide insight into how bacteria are killed by the complement system, an important part of the host immune response against bacterial infections. Complement is a large protein network in plasma that labels bacteria for phagocytosis and directly kills them via the formation of a pore-forming complex (Membrane Attack Complex (MAC)). Currently we do not understand how complement activation results in bacterial killing. This knowledge gap is mainly caused by the lack of tools to study the enzymes that trigger MAC formation: the C5 convertases.
In my lab, we recently established a novel assay system for C5 convertases that allows us for the first time to study these enzymes under purified conditions. This model, combined with my expertise in microbiology, places my lab in a unique position to understand C5 convertase biology (Aim 1), determine the enzyme's role in MAC functioning (Aim 2) and elucidate how the MAC kills bacteria (Aim 3). Thus, I aim to provide insight into the molecular events necessary for bacterial killing by the complement system.
I will use biochemical, structural and microbiological approaches to elucidate the precise molecular arrangement of C5 convertases in vitro and on bacterial cells. I will generate unique tools to study how C5 convertases regulate MAC insertion into bacterial membranes. Finally, I will engineer fluorescent bacteria and labeled complement proteins to perform advanced microscopy analyses of how MAC kills bacteria.
These insights will lead to fundamental knowledge about the functioning of complement and will create new avenues for blocking undesired complement activation during systemic infections and acute inflammatory processes. Furthermore, this knowledge will improve desired complement activation by therapeutic antibodies and vaccination strategies in infectious diseases. Finally, this work opens up new possibilities to understand how both humans and bacteria regulate complement.
Max ERC Funding
1 497 290 €
Duration
Start date: 2015-03-01, End date: 2020-02-29
Project acronym COMET
Project foundations of COmputational similarity geoMETry
Researcher (PI) Michael Bronstein
Host Institution (HI) UNIVERSITA DELLA SVIZZERA ITALIANA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Similarity is one of the most fundamental notions encountered in problems practically in every branch of science, and is especially crucial in image sciences such as computer vision and pattern recognition. The need to quantify similarity or dissimilarity of some data is central to broad categories of problems involving comparison, search, matching, alignment, or reconstruction. The most common way to model a similarity is using metrics (distances). Such constructions are well-studied in the field of metric geometry, and there exist numerous computational algorithms allowing, for example, to represent one metric using another by means of isometric embeddings.
However, in many applications such a model appears to be too restrictive: many types of similarity are non-metric; it is not always possible to model the similarity precisely or completely e.g. due to missing data; some objects might be mutually incomparable e.g. if they are coming from different modalities. Such deficiencies of the metric similarity model are especially pronounced in large-scale computer vision, pattern recognition, and medical imaging applications.
The ambitious goal of this project is to introduce a paradigm shift in the way we model and compute similarity. We will develop a unifying framework of computational similarity geometry that extends the theoretical metric model, and will allow developing efficient numerical and computational tools for the representation and computation of generic similarity models. The methods will be developed all the way from mathematical concepts to efficiently implemented code and will be applied to today’s most important and challenging problems in Internet-scale computer vision and pattern recognition, shape analysis, and medical imaging."
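A small example of why the metric model is too restrictive (our illustration, with toy vectors): a natural dissimilarity such as one minus cosine similarity violates the triangle inequality, so it cannot be handled by purely metric machinery.

```python
import numpy as np
from itertools import permutations

def dissim(u, v):
    """1 - cosine similarity: a natural, but non-metric, dissimilarity."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

points = {
    "a": np.array([1.0, 0.0]),
    "b": np.array([1.0, 1.0]),
    "c": np.array([0.0, 1.0]),
}

for x, y, z in permutations(points, 3):
    d_xz = dissim(points[x], points[z])
    d_xy = dissim(points[x], points[y])
    d_yz = dissim(points[y], points[z])
    if d_xz > d_xy + d_yz + 1e-12:
        print(f"triangle inequality violated: d({x},{z}) = {d_xz:.3f} "
              f"> d({x},{y}) + d({y},{z}) = {d_xy + d_yz:.3f}")
```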
Max ERC Funding
1 495 020 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym CONTROL
Project Behavioral Foundations of Power and Control
Researcher (PI) Holger HERZ
Host Institution (HI) UNIVERSITE DE FRIBOURG
Call Details Starting Grant (StG), SH1, ERC-2018-STG
Summary Power relations are an integral part of economic organizations, as well as political and social institutions. People exercise power over others – or are exposed to the power of others – in government, in firms, and even in families. People care deeply about power and autonomy, and attitudes towards them have important economic and societal consequences. Examples include such diverse matters as the willingness to delegate power to government, empire building in public organizations, or sorting into more or less autonomous jobs. Despite their importance, we have remarkably little knowledge about preferences for power and autonomy. Clearly, power and autonomy are valued for being instrumental in achieving desirable outcomes, but it has also long been argued that they are valuable for their own sake. Existing value measures of power and autonomy, however, fail to distinguish between intrinsic and instrumental value components. Power distance and autonomy are even considered to be cultural values, but we don’t know whether differences in such measures are rooted in differences in the instrumental value or differences in preferences. We propose a novel revealed preference approach that allows us to address this shortcoming by separately measuring the intrinsic value of power and the intrinsic value of autonomy. We can then apply this method to properly assess heterogeneity in such values within and across cultures. By combining our measures with other data, we will be able to study the importance of such preferences in explaining individual differences, such as occupational choices or expressed political views, as well as economic outcomes across countries, such as the level of decentralization in economic organizations. Finally, we will study how behavioral reactions to power interact with such preferences and organizational structure, in order to better understand how institutions can be efficiently designed when behavioral reactions to power are accounted for.
Max ERC Funding
1 492 785 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym COSYM
Project Computational Symmetry for Geometric Data Analysis and Design
Researcher (PI) Mark Pauly
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary The analysis and synthesis of complex 3D geometric data sets is of crucial importance in many scientific disciplines (e.g. biomedicine, materials science, mechanical engineering, physics) and industrial applications (e.g. drug design, entertainment, architecture). We are currently witnessing a tremendous increase in the size and complexity of geometric data, largely fueled by significant advances in 3D acquisition and digital production technology. However, existing computational tools are often not suited to handle this complexity.
The goal of this project is to explore a fundamentally different way of processing 3D geometry. We will investigate a new generalized model of geometric symmetry as a unifying concept for studying spatial organization in geometric data. This model allows us to expose the inherent redundancies in digital 3D data and will enable truly scalable algorithms for analysis, processing, and design of large-scale geometric data sets. The proposed research will address a number of fundamental questions: What is the information content of 3D geometric models? How can we represent, store, and transmit geometric data most efficiently? Can we use symmetry to repair deficiencies and reduce noise in acquired data? What is the role of symmetry in the design process and how can it be used to reduce complexity?
I will investigate these questions with an integrated approach that combines thorough theoretical studies with practical solutions for real-world applications.
The proposed research has a strong interdisciplinary component and will consider the same fundamental questions from different perspectives, closely interacting with scientists of various disciplines, as well as artists, architects, and designers.
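The "invariance under a transformation" idea at the heart of computational symmetry can be sketched compactly (an illustrative toy of ours, not the project's algorithms): test whether a point set is approximately preserved by reflection across candidate axes.

```python
import numpy as np

def reflect(points, angle):
    """Mirror points across the line through the centroid at the given angle."""
    c = points.mean(axis=0)
    d = np.array([np.cos(angle), np.sin(angle)])   # unit direction of the axis
    rel = points - c
    return c + 2 * np.outer(rel @ d, d) - rel      # v -> 2(v.d)d - v

def symmetry_error(points, angle):
    """Mean distance from each mirrored point to its nearest original point."""
    mirrored = reflect(points, angle)
    dists = np.linalg.norm(points[None, :, :] - mirrored[:, None, :], axis=2)
    return dists.min(axis=1).mean()

square = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
for deg in (0, 30, 45, 90):
    err = symmetry_error(square, np.radians(deg))
    print(f"axis at {deg:2d} deg: error = {err:.4f}")  # ~0 at 0, 45, 90 degrees
```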
Max ERC Funding
1 160 302 €
Duration
Start date: 2011-02-01, End date: 2016-01-31
Project acronym CTLANDROS
Project Reactive Oxygen Species in CTL-mediated Cell Death: from Mechanism to Applications
Researcher (PI) Denis Martinvalet
Host Institution (HI) UNIVERSITE DE GENEVE
Call Details Starting Grant (StG), LS6, ERC-2010-StG_20091118
Summary Cytotoxic T lymphocytes (CTL) and natural killer (NK) cells release granzymes and perforin from cytotoxic granules into the immune synapse to induce apoptosis of target cells that are either virus-infected or cancerous. Granzyme A activates a caspase-independent apoptotic pathway and induces mitochondrial damage characterized by superoxide anion production and loss of the mitochondrial transmembrane potential, without disrupting the integrity of the mitochondrial outer membrane, while causing single-stranded DNA damage. GzmB induces both caspase-dependent and caspase-independent cell death. In the caspase-dependent pathway, mitochondrial functions are altered, as evidenced by the loss of mitochondrial transmembrane potential and the generation of reactive oxygen species (ROS). The mitochondrial outer membrane (MOM) is disrupted, resulting in the release of apoptogenic factors. To date, research on mitochondria-dependent apoptosis has focused on mitochondrial outer membrane permeabilization (MOMP); however, whether the generation of ROS is incidental or essential to the execution of apoptosis remains unclear. Like human GzmA, human GzmB promotes cell death in a ROS-dependent manner. Preliminary data suggest that human GzmB can induce ROS in a MOMP-independent manner, as Bax and Bak double-knockout MEF cells treated with human GzmB and perforin still display robust ROS production and die in a ROS-dependent manner. Since GzmA and GzmB induce cell death in a ROS-dependent manner, we hypothesize that oxygen free radicals are central to the execution of programmed cell death induced by cytotoxic granules. The goal of this proposal is therefore to dissect the key molecular events triggered by ROS that lead to cytotoxic T cell-induced target cell death. A combination of biochemical, genetic and proteomic approaches, in association with electron spin resonance (ESR) spectroscopy, will be used to unravel the essential role ROS play in CTL-mediated killing.
Max ERC Funding
1 500 000 €
Duration
Start date: 2011-05-01, End date: 2016-04-30
Project acronym DAPP
Project Data-centric Parallel Programming
Researcher (PI) Torsten Hoefler
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary We address a fundamental and increasingly important challenge in computer science: how to program large-scale heterogeneous parallel computers. Society relies on these computers to satisfy the growing demands of important applications such as drug design, weather prediction, and big data analytics. Architectural trends make heterogeneous parallel processors the fundamental building blocks of computing platforms ranging from quad-core laptops to million-core supercomputers; failing to exploit these architectures efficiently will severely limit the technological advance of our society. Computationally demanding problems are often inherently parallel and can readily be compiled for various target architectures. Yet, efficiently mapping data to the target memory system is notoriously hard, and the cost of fetching two operands from remote memory is already orders of magnitude higher than that of any arithmetic operation. Data access cost grows with the amount of parallelism, which makes data layout optimizations crucial. Prevalent parallel programming abstractions largely ignore data access and guide programmers to design threads of execution that are scheduled onto the machine. We depart from this control-centric model in favor of a data-centric program formulation, where we express programs as collections of values, called memlets, that are mapped as first-class objects by the compiler and runtime system. Our holistic compiler and runtime system aims to substantially advance the state of the art in parallel computing by combining static and dynamic scheduling of memlets to complex heterogeneous target architectures. We will demonstrate our methods on three challenging real-world applications in scientific computing, data analytics, and graph processing. We strongly believe that, without holistic data-centric programming, the growing complexity and inefficiency of parallel programming will create a scaling wall that will limit our future computational capabilities.
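To make the contrast with thread-centric programming concrete, here is a minimal, hypothetical sketch of a data-centric program description in Python. All names (Memlet, tiled_scale, the memory-space labels) are illustrative assumptions, not the project's actual interface; the point is only that each step declares which data it moves, so a compiler or runtime can schedule the movements.

from dataclasses import dataclass

@dataclass
class Memlet:
    array: str     # name of the data container being moved
    subset: slice  # range of elements this movement touches
    src: str       # source memory space, e.g. "host"
    dst: str       # destination memory space, e.g. "gpu0"

def tiled_scale(n: int, tile: int):
    # Express y = a * x tile by tile as explicit data movements plus
    # per-tile compute steps, instead of as threads with hidden accesses.
    program = []
    for start in range(0, n, tile):
        s = slice(start, min(start + tile, n))
        program.append(Memlet("x", s, "host", "gpu0"))     # stage input tile
        program.append(("compute", "y[i] = a * x[i]", s))  # per-tile kernel
        program.append(Memlet("y", s, "gpu0", "host"))     # write back result
    return program

# Because every step names the data it touches, a scheduler is free to
# reorder, fuse, or overlap these movements across heterogeneous devices.
print(tiled_scale(10, 4))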
Max ERC Funding
1 499 672 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym DEVTAXNET
Project Tax Evasion in Developing Countries. The Role of Firm Networks
Researcher (PI) Dina Deborah POMERANZ
Host Institution (HI) UNIVERSITAT ZURICH
Call Details Starting Grant (StG), SH1, ERC-2017-STG
Summary Tax evasion leads to billions of Euros of losses in government revenue around the world. This not only affects public budgets, but can also create large distortions between activities that are fully taxed and others that escape taxation through evasion. These issues are particularly severe in developing countries, where evasion is especially high and governments struggle to raise funds for basic services and infrastructure, while at the same time trying to grow independent of international aid.
It is widely suspected that some of the most common and difficult to detect forms of evasion involve interactions across firm networks. However, due to severe data limitations, the existing literature has mostly considered taxpayers as isolated units. Empirical evidence on tax compliance in firm networks is extremely sparse.
This proposal describes three Sub-Projects to fill this gap. They are made possible by access I have obtained, through five years of prior research and policy engagement, to unique datasets from Chile and Ecuador on both supply-chain networks and joint ownership structures.
The first Sub-Project focuses on international firm networks. It aims to analyze profit shifting of multinational firms to low tax jurisdictions, exploiting a natural experiment in Chile that strongly increased monitoring of international tax norms.
The second Sub-Project investigates the analogous issue at the intranational level: profit shifting and tax collusion in networks of firms within the same country. Despite much anecdotal evidence, this behavior has received little rigorous empirical scrutiny.
The final Sub-Project is situated at the nexus between international and national firms. It seeks to estimate a novel form of FDI spillover: the impact of foreign-owned firms on the tax compliance of their local trading partners.
DEVTAXNET will provide new insights about the role of firm networks for tax evasion that are valuable to academics and policy makers alike.
Max ERC Funding
1 288 125 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym DYNAMIC MODELS
Project Solving dynamic models: Theory and Applications
Researcher (PI) Felix Egbert Kübler
Host Institution (HI) UNIVERSITAT ZURICH
Call Details Starting Grant (StG), SH1, ERC-2011-StG_20101124
Summary The computation of equilibria in dynamic stochastic general equilibrium models with heterogeneous agents has become increasingly important in macroeconomics and public finance. For a given example economy, i.e. a given specification of preferences, technologies and market arrangements, these methods compute an (approximate) equilibrium and allow for quantitative statements about one equilibrium of the example economy. Through these so-called 'computational experiments', many economic insights can be obtained by analyzing quantitative features of realistically calibrated models. Unfortunately, economists often use ad hoc computational methods with poorly understood properties that produce approximate solutions of unknown quality.
The research project outlined in this proposal has three goals: building theoretical foundations for analyzing dynamic equilibrium models, developing efficient and stable algorithms for the computation of equilibria in large-scale models, and applying these algorithms to macroeconomic policy analysis.
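As an illustration of the kind of computation involved (a textbook example, not one of the project's own algorithms), the following sketch solves a one-agent stochastic growth model with log utility, Cobb-Douglas production, full depreciation and a two-state productivity shock by value function iteration; all parameter values are arbitrary assumptions.

import numpy as np

alpha, beta = 0.36, 0.95                 # capital share, discount factor
z = np.array([0.9, 1.1])                 # productivity states
P = np.array([[0.8, 0.2], [0.2, 0.8]])   # Markov transition matrix
k = np.linspace(0.05, 0.5, 200)          # capital grid
V = np.zeros((2, k.size))                # value function V(z, k)

for _ in range(1000):
    # Consumption c = z*k^alpha - k' for every (z, k, k') combination.
    c = z[:, None, None] * k[None, :, None] ** alpha - k[None, None, :]
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    EV = P @ V                           # expected continuation value by state
    V_new = (u + beta * EV[:, None, :]).max(axis=2)  # optimize over k'
    diff = np.max(np.abs(V_new - V))     # sup-norm contraction error
    V = V_new
    if diff < 1e-8:
        break

# V now approximates the value function; policies follow from the argmax.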
Max ERC Funding
1 114 800 €
Duration
Start date: 2011-10-01, End date: 2016-09-30
Project acronym EPOQUE
Project Engineering post-quantum cryptography
Researcher (PI) Peter SCHWABE
Host Institution (HI) STICHTING KATHOLIEKE UNIVERSITEIT
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary "Our digital society critically relies on protection of data and communication against espionage and cyber crime. Underlying all protection mechanisms is cryptography, which we are using
daily to protect, for example, internet communication or e-banking. This protection is threatened by the dawn of universal quantum computers, which will break large parts of the
cryptography in use today. Transitioning current cryptographic algorithms to crypto that resist attacks by large quantum computers, so called ""post-quantum cryptography"", is possibly the
largest challenge applied cryptography is facing since becoming a domain of public research in the second half of the last century. Large standardization bodies, most prominently ETSI and
NIST, have started efforts to evaluate concrete proposals of post-quantum crypto for standardization and deployment. NIST's effort follows in the tradition of successful public ""crypto
competitions"" with strong involvement by the academic cryptographic community. It is expected to run through the next 5 years.
This project will tackle the engineering challenges of post-quantum cryptography following two main research directions. The first direction investigates implementation characteristics of
submissions to NIST for standardization. These include speed on various platforms, code size, and RAM usage. Furthermore we will study so-called side-channel attacks and propose suitable
countermeasures. Side-channel attacks use information such as timing or power consumption of cryptographic devices to obtain secret information. The second direction is about protocol
integration. We will examine how different real-world cryptographic protocols can accommodate the drastically different performance characteristics of post-quantum cryptography, explore
what algorithms suit best the requirements of common usage scenarios of these protocols, and investigate if changes to the high-level protocol layer are advisable to improve overall system
performance."
Max ERC Funding
1 500 000 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym ERINFLAMMATION
Project Inflammatory signals emerging from the endoplasmic reticulum
Researcher (PI) Fabio Martinon
Host Institution (HI) UNIVERSITE DE LAUSANNE
Call Details Starting Grant (StG), LS6, ERC-2011-StG_20101109
Summary The endoplasmic reticulum (ER) serves many general functions, including the facilitation of protein folding and the transport of synthesized proteins, but it also has an important and more specialized role in sensing cellular stress. ER-stress identifies a group of signals that induce a transcriptional program enabling cells to survive protein overload and injury in the ER. This highly coordinated response involves three parallel signaling branches localized at the ER, namely IRE1, ATF6 and PERK.
New findings suggest that these signaling pathways may regulate cellular processes independently of the ER-stress response. We have previously shown that some innate immune receptors, such as Toll-like receptors, specifically activate the IRE1 signaling pathway to enhance cytokine production. However, this is an emerging field of research, and little is known about the specific nature of ER-signaling pathways and their function in regulating cellular processes in the absence of a classical ER-stress response.
The long-term goals of this proposal are to elucidate the molecular mechanisms and pathways that emerge from the ER and regulate innate immune responses, and to address the physiological role of specific ER-signaling pathways in inflammation. Three complementary research sub-projects were designed. The first sub-project will identify and characterize compounds and conditions that trigger specific ER-signaling pathways. The second sub-project focuses on the biochemical characterization of signaling pathways emerging from the ER-associated kinases IRE1 and PERK. The third sub-project investigates the mechanisms by which ER-signaling pathways affect innate immune and inflammatory responses.
The knowledge gained from this study will provide a better understanding of the mechanisms by which the ER and the cytosol interact to orchestrate physiological responses that help the organism to cope with infections and pathogenic insults.
Max ERC Funding
1 498 076 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym FLOVIST
Project Flow visualization inspired aero-acoustics with time-resolved Tomographic Particle Image Velocimetry
Researcher (PI) Fulvio Scarano
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary "The recent developments of the Tomographic Particle Image Velocimetry technique and of the non-intrusive pressure field characterization method, by the applicant at TU Delft Aerospace Engineering, now opens unforeseen perspectives in the area of unsteady flow diagnostics and experimental aero-acoustics. As a result of this work it is now possible not only to quantify complex flows in their three-dimensional structure, but also to extract quantities such as pressure. The current research proposal aims at the development of an innovative approach to experimental aero-acoustics and flow control making use of the recently developed Tomographic-PIV technique. The objective is to fully describe and quantify the flow pattern and the related acoustic source term at its origin, which is of paramount importance to understand and control the processes like acoustic noise production and flow separation dominating aerodynamic drag. This is relevant for the improvement of aircrafts design as far as drag reduction and noise emission is related and should enable the development of ""greener"" aircrafts for a sustainable growth of aviation in populated areas, in harmony with the technology innovation policy in Europe (7th Framework Programme) and TU Delft sustainable development focus (CleanEra, Cost-Effective Low emission And Noise Efficient regional Aircraft) at Aerospace Engineering. To achieve this step it is required that such new-generation diagnostic approach by the Tomo-PIV technique is further developed into a quadri-dimensional measurement tool (4D-PIV), enabling to extract the relevant acoustic information from the experimental observation invoking the aeroacoustic analogies. A wide industrial and academic network (DLR, AIRBUS, DNW, NLR, LaVision, EWA, JMBC Burgerscentrum) developed in recent years is available to exploit the results of the proposed activity."
Max ERC Funding
1 498 000 €
Duration
Start date: 2008-08-01, End date: 2013-07-31