Project acronym ABEP
Project Asset Bubbles and Economic Policy
Researcher (PI) Jaume Ventura Fontanet
Host Institution (HI) Centre de Recerca en Economia Internacional (CREI)
Country Spain
Call Details Advanced Grant (AdG), SH1, ERC-2009-AdG
Summary Advanced capitalist economies experience large and persistent movements in asset prices that are difficult to justify with economic fundamentals. The internet bubble of the 1990s and the real estate bubble of the 2000s are two recent examples. The predominant view is that these bubbles are a market failure, caused by some form of individual irrationality on the part of market participants. This project is based instead on the view that market participants are individually rational, although this does not preclude occasionally collectively sub-optimal outcomes. Bubbles are thus not a source of market failure by themselves but instead arise as a result of a pre-existing market failure, namely, the existence of pockets of dynamically inefficient investments. Under some conditions, bubbles partly solve this problem, increasing market efficiency and welfare. It is also possible, however, that bubbles do not solve the underlying problem and, in addition, create negative side-effects. The main objective of this project is to develop this view of asset bubbles and produce an empirically relevant macroeconomic framework that allows us to address the following questions: (i) What is the relationship between bubbles and financial market frictions? Special emphasis is given to how the globalization of financial markets and the development of new financial products affect the size and effects of bubbles. (ii) What is the relationship between bubbles, economic growth and unemployment? The theory suggests the presence of virtuous and vicious cycles, as economic growth creates the conditions for bubbles to emerge, while bubbles in turn create incentives for economic growth. (iii) What is the optimal policy for managing bubbles? We need to develop tools that allow policy makers to sustain those bubbles that have positive effects and burst those that have negative effects.
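The link between dynamic inefficiency and rational bubbles can be made concrete with the textbook condition (a standard Samuelson–Tirole argument, not spelled out in the summary itself): a rational bubble must earn the market return, so its value grows at the interest rate,

```latex
b_{t+1} = (1+r)\, b_t
\quad\Longrightarrow\quad
\frac{b_{t+1}}{Y_{t+1}} = \frac{1+r}{1+g}\,\frac{b_t}{Y_t},
```

where $Y_t$ is output growing at rate $g$. The bubble-to-output ratio stays bounded only if $r \le g$, i.e. precisely when the economy harbors dynamically inefficient investment, which is the pre-existing market failure the project takes as its starting point.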
Max ERC Funding
1 000 000 €
Duration
Start date: 2010-04-01, End date: 2015-03-31
Project acronym ANGEOM
Project Geometric analysis in the Euclidean space
Researcher (PI) Xavier Tolsa Domenech
Host Institution (HI) UNIVERSIDAD AUTONOMA DE BARCELONA
Country Spain
Call Details Advanced Grant (AdG), PE1, ERC-2012-ADG_20120216
Summary We propose to study several questions in the area of so-called geometric analysis. Most of the topics we are interested in concern the connection between the behavior of singular integrals and the geometry of sets and measures. In recent years, the study of this connection has proved extremely helpful in the solution of certain long-standing problems, such as the Painlevé problem and Astala's optimal distortion bounds for quasiconformal mappings.
More specifically, we would like to study the relationship between the L^2 boundedness of singular integrals associated with Riesz and other related kernels, and rectifiability and other geometric notions. The so-called David-Semmes problem is probably the main open problem in this area. Up to now, the techniques used to deal with this problem come from multiscale analysis and involve ideas from Littlewood-Paley theory and quantitative rectifiability. We propose to apply new ideas that combine variational arguments with other techniques connected to mass transportation. Further, we think it is worth exploring in more detail the connection among mass transportation, singular integrals, and uniform rectifiability.
We are also interested in the field of quasiconformal mappings. We plan to study a problem regarding the quasiconformal distortion of quasicircles, namely proving that the bounds obtained recently by S. Smirnov on the dimension of K-quasicircles are optimal. We want to apply techniques from quantitative geometric measure theory to this question.
Another question we intend to explore lies at the interplay of harmonic analysis, geometric measure theory and partial differential equations. It concerns an old problem on the unique continuation of harmonic functions at the boundary of open C^1 or Lipschitz domains. All results known to date deal with the smoother Dini domains.
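For concreteness, the central object can be written down explicitly (standard notation in this area; the formulation below is a gloss, not quoted from the proposal). The s-dimensional Riesz transform of f with respect to a measure μ on R^d is

```latex
\mathcal{R}^{s}_{\mu} f(x) \;=\; \int \frac{x-y}{|x-y|^{s+1}}\, f(y)\, d\mu(y),
```

and the David–Semmes problem asks whether, for an n-dimensional Ahlfors–David regular measure μ, boundedness of R^n_μ on L^2(μ) forces μ to be uniformly n-rectifiable, i.e. whether analytic control of the operator already encodes the geometry of the support.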
Max ERC Funding
1 105 930 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym APMPAL
Project Asset Prices and Macro Policy when Agents Learn
Researcher (PI) Albert Marcet Torrens
Host Institution (HI) FUNDACIÓ MARKETS, ORGANIZATIONS AND VOTES IN ECONOMICS
Country Spain
Call Details Advanced Grant (AdG), SH1, ERC-2012-ADG_20120411
Summary A conventional assumption in dynamic models is that agents form their expectations in a very sophisticated manner; in particular, that they have Rational Expectations (RE). We develop tools to relax this assumption while retaining fully optimal behaviour by agents, and we study the implications for asset pricing and macro policy.
We assume that agents have a consistent set of beliefs that is close, but not equal, to RE. Agents are "Internally Rational": they behave rationally given their system of beliefs. It is thus conceptually a small deviation from RE. It provides microfoundations for models of adaptive learning, since the learning algorithm is determined by agents' optimal behaviour. In previous work we have shown that this framework can match stock price and housing price fluctuations, and that its policy implications are quite different.
In this project we intend to: i) develop further the foundations of internally rational (IR) learning; ii) apply this framework to explain observed asset price behavior, such as stock prices, bond prices, inflation, commodity derivatives, and exchange rates; iii) extend the IR framework to the case when agents entertain various models; iv) study optimal policy under IR learning and under private information when some hidden shocks are not revealed ex post. Along the way we will address policy issues such as: the effects of creating derivative markets, sovereign spreads as signals of sovereign default risk, tests of fiscal sustainability, fiscal policy when agents learn, monetary policy (more specifically, QE measures and interest rate policy), and the role of credibility in macro policy.
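The self-referential loop at the heart of learning models, where prices depend on beliefs and beliefs are updated from realized prices, can be sketched in a few lines. This is a purely illustrative textbook constant-gain recursion, not the project's model; all functional forms and parameter values are assumptions.

```python
def update_belief(belief, observed_growth, gain=0.05):
    """Constant-gain recursive update of the perceived gross price-growth factor."""
    return belief + gain * (observed_growth - belief)

def price_given_belief(dividend, belief, discount=0.95):
    """One-step subjective pricing: p = d + discount * E[p'], with the agent
    believing E[p'] = belief * p, gives p = d / (1 - discount * belief).
    (Requires discount * belief < 1.)"""
    return dividend / (1.0 - discount * belief)

def simulate(dividends, belief0=1.0, gain=0.05, discount=0.95):
    """Run the feedback loop: price today from current beliefs, then revise
    beliefs from the price growth that those very beliefs produced."""
    belief = belief0
    prices = []
    for d in dividends:
        p = price_given_belief(d, belief, discount)
        if prices:
            belief = update_belief(belief, p / prices[-1], gain)
        prices.append(p)
    return prices, belief
```

With constant dividends and an initial belief of no growth the loop stays at its fixed point; with belief0 > 1 the sketch generates the price/belief spirals that learning models use to capture bubble-like dynamics.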
Max ERC Funding
1 970 260 €
Duration
Start date: 2013-06-01, End date: 2018-08-31
Project acronym APMPAL-HET
Project Asset Prices and Macro Policy when Agents Learn and are Heterogeneous
Researcher (PI) Albert MARCET TORRENS
Host Institution (HI) Centre de Recerca en Economia Internacional (CREI)
Country Spain
Call Details Advanced Grant (AdG), SH1, ERC-2017-ADG
Summary Based on the APMPAL (ERC) project, we continue to develop the frameworks of internal rationality (IR) and optimal signal extraction (OSE). Under IR, investors/consumers behave rationally given their subjective beliefs about prices, and these beliefs are compatible with the data. Under OSE, the government has partial information: it knows how policy influences observed variables and performs signal extraction.
We develop further the foundations of IR and OSE with an emphasis on heterogeneous agents. We study sovereign bond crises and heterogeneity of beliefs in asset pricing models under IR, using survey data on expectations. Under IR the assets’ stochastic discount factor depends on the agents’ decision function and beliefs; this modifies some key asset pricing results. We extend OSE to models with state variables, forward-looking constraints and heterogeneity.
Under IR agents’ prior beliefs determine the effects of a policy reform. If the government does not observe prior beliefs it has partial information, thus OSE should be used to analyse policy reforms under IR.
If IR heterogeneous workers forecast their productivity either from their own wage or from their neighbours’ wages in a network, low current wages discourage search and human capital accumulation, leading to low productivity. This can explain the low development of a country or the social exclusion of a group. Worker subsidies redistribute wealth and can increase productivity if they “teach” agents to exit a low-wage state.
We build DSGE models under IR for prediction and policy analysis. We develop time-series tools for predicting macro and asset market variables, using information available to the analyst, and we introduce non-linearities and survey expectations using insights from models under IR.
We study how IR and OSE change the view on macro policy issues such as tax smoothing, debt management, Taylor rule, level of inflation, fiscal/monetary policy coordination, factor taxation or redistribution.
Max ERC Funding
1 524 144 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym ARISYS
Project Engineering an artificial immune system with functional components assembled from prokaryotic parts and modules
Researcher (PI) Victor De Lorenzo Prieto
Host Institution (HI) AGENCIA ESTATAL CONSEJO SUPERIOR DEINVESTIGACIONES CIENTIFICAS
Country Spain
Call Details Advanced Grant (AdG), LS9, ERC-2012-ADG_20120314
Summary The objective of this project is to overcome current limitations for antibody production that are inherent to the extant immune system of vertebrates. This will be done by creating an all-in-one artificial/synthetic counterpart based exclusively on prokaryotic parts, devices and modules. To this end, ARISYS will exploit design concepts, construction hierarchies and standardization notions that stem from contemporary Synthetic Biology for the assembly and validation of (what we believe is) the most complex artificial biological system ventured thus far. This all-bacterial immune-like system will not only simplify and make affordable the manipulations necessary for antibody generation, but will also permit the application of such binders, by themselves or displayed on bacterial cells, to biotechnological challenges well beyond therapeutic and health-related uses. The work plan involves the assembly and validation of autonomous functional modules for [i] displaying antibody/affibody (AB) scaffolds attached to the surface of bacterial cells, [ii] conditional diversification of the target-binding sequences of the ABs, [iii] contact-dependent activation of gene expression, [iv] reversible bistable switches, and [v] clonal selection and amplification of improved binders. These modules, composed of stand-alone parts and bearing well-defined input/output functions, will be assembled in the genomic chassis of streamlined Escherichia coli and Pseudomonas putida strains. The resulting molecular network will allow the ABs expressed and displayed on the cell surface to proceed spontaneously (or at the user's decision) through successive cycles of affinity and specificity maturation towards antigens or other targets presented to the bacterial population. In this way, a single, easy-to-handle (albeit heavily engineered) strain will govern all the operations that are typically scattered across a multitude of separate methods and apparatuses for AB production.
Max ERC Funding
2 422 271 €
Duration
Start date: 2013-05-01, End date: 2019-04-30
Project acronym BEMOTHER
Project Becoming a mother: An integrative model of adaptations for motherhood during pregnancy and the postpartum period.
Researcher (PI) Oscar VILARROYA
Host Institution (HI) UNIVERSIDAD AUTONOMA DE BARCELONA
Country Spain
Call Details Advanced Grant (AdG), SH4, ERC-2019-ADG
Summary Pregnancy involves biological adaptations that are necessary for the onset, maintenance and regulation of maternal behavior. We were the first group to find (1, 2) that pregnancy is associated with consistent, pronounced and long-lasting reductions in cerebral gray matter (GM) volume in areas of the social-cognition network. The aim of BEMOTHER is to develop an integrative model of the adaptations for motherhood that occur during pregnancy and the postpartum period by: i) establishing when the brain of pregnant women begins to change and how it evolves; ii) characterizing the dynamics of cognitive performance, theory-of-mind, maternal-infant bonding and psychiatric measures; iii) assessing the effect of environmental and/or psychological factors on the maternal adaptations; iv) identifying the metabolomic biomarkers associated with maternal adaptations; and v) integrating the previous findings within the Research Domain Criteria (RDoC) framework (3). We will use a prospective longitudinal design with 5 time points (1 pre-pregnancy session, 2 intra-pregnancy sessions and 2 postpartum sessions) during which neuroimaging, psychological, behavioral and metabolomics data will be acquired in 3 groups of women: a group of nulliparous women who will undergo a full-term pregnancy, another group of nulliparous women whose same-sex partners will undergo a full-term pregnancy, and a group of control nulliparous women. We will provide the longitudinal RDoC-based model at the end of the study, but we will also deliver intermediate longitudinal evaluations after the postpartum session, as well as cross-sectional analyses after the first intra-pregnancy session and the postpartum session. BEMOTHER is timely and innovative. It adopts the translational RDoC framework in order to provide a pioneering, comprehensive and dynamic characterization of the adaptations for motherhood, addressing the interaction among different functional domains at different levels of analysis.
Max ERC Funding
2 465 131 €
Duration
Start date: 2020-10-01, End date: 2025-09-30
Project acronym BILITERACY
Project Bi-literacy: Learning to read in L1 and in L2
Researcher (PI) Manuel Francisco Carreiras Valina
Host Institution (HI) BCBL BASQUE CENTER ON COGNITION BRAIN AND LANGUAGE
Country Spain
Call Details Advanced Grant (AdG), SH4, ERC-2011-ADG_20110406
Summary Learning to read is probably one of the most exciting discoveries of our lives. Using a longitudinal approach, the proposed research examines how the human brain responds to two major challenges: (a) the instantiation of a complex cognitive function for which there is no genetic blueprint (learning to read in a first language, L1), and (b) the accommodation to new statistical regularities when learning to read in a second language (L2). The aim of the present research project is to identify the neural substrates of the reading process and its constituent cognitive components, with specific attention to individual differences and reading disabilities, as well as to investigate the relationship between specific cognitive functions and the changes in neural activity that take place in the course of learning to read in L1 and in L2. The project will employ a longitudinal design. We will recruit children before they learn to read in L1 and in L2 and track reading development with both cognitive and neuroimaging measures over 24 months. The findings from this project will provide a deeper understanding of (a) how general neurocognitive factors and language-specific factors underlie individual differences – and reading disabilities – in reading acquisition in L1 and in L2; (b) how the neuro-cognitive circuitry changes and brain mechanisms synchronize while instantiating reading in L1 and in L2; and (c) what the limitations and the extent of brain plasticity are in young readers. An interdisciplinary and multi-methodological approach is one of the keys to the success of the present project, along with strong theory-driven investigation. By combining both we will generate breakthroughs that advance our understanding of how literacy in L1 and in L2 is acquired and mastered.
The research proposed will also lay the foundations for more applied investigations of best practice in teaching reading in first and subsequent languages, and devising intervention methods for reading disabilities.
Max ERC Funding
2 487 000 €
Duration
Start date: 2012-05-01, End date: 2017-04-30
Project acronym BIOFORCE
Project Simultaneous multi-pathway engineering in crop plants through combinatorial genetic transformation: Creating nutritionally biofortified cereal grains for food security
Researcher (PI) Paul Christou
Host Institution (HI) UNIVERSIDAD DE LLEIDA
Country Spain
Call Details Advanced Grant (AdG), LS9, ERC-2008-AdG
Summary BIOFORCE has a highly ambitious applied objective: to create transgenic cereal plants that will provide a near-complete micronutrient complement (vitamins A, C, E and folate, and the essential minerals Ca, Fe, Se and Zn) for malnourished people in the developing world, as well as built-in resistance to insects and parasitic weeds. This in itself represents a striking advance over current efforts to address food insecurity using applied biotechnology in the developing world. We will also address fundamental mechanistic aspects of multi-gene/pathway engineering through transcriptome and metabolome profiling. The fundamental science and applied objectives will be achieved through the application of an exciting novel technology (combinatorial genetic transformation) developed and patented by my research group. This allows the simultaneous transfer of an unlimited number of transgenes into plants, followed by library-based selection of plants with appropriate genotypes and phenotypes. All transgenes integrate into one locus, ensuring expression stability over multiple generations. This proposal represents a new line of research in my laboratory, founded on incremental advances in the elucidation of transgene integration mechanisms in plants over the past two and a half decades. Beyond scientific issues, BIOFORCE addresses challenges such as intellectual property, regulatory and biosafety issues and, crucially, how the fruits of our work will be taken up through philanthropic initiatives in the developing world while creating exploitable opportunities elsewhere. BIOFORCE is comprehensive: it provides a complete package that stands to make an unprecedented contribution to food security in the developing world, while at the same time generating new knowledge to streamline and simplify multiplex gene transfer and the simultaneous modification of multiple complex plant metabolic pathways.
Max ERC Funding
2 290 046 €
Duration
Start date: 2009-04-01, End date: 2014-03-31
Project acronym BUBPOL
Project Monetary Policy and Asset Price Bubbles
Researcher (PI) Jordi Galí Garreta
Host Institution (HI) Centre de Recerca en Economia Internacional (CREI)
Country Spain
Call Details Advanced Grant (AdG), SH1, ERC-2013-ADG
Summary "The proposed research project seeks to further our understanding on two important questions for the design of monetary policy:
(a) What are the effects of monetary policy interventions on asset price bubbles?
(b) How should monetary policy be conducted in the presence of asset price bubbles?
The first part of the project will focus on the development of a theoretical framework that can be used to analyze rigorously the implications of alternative monetary policy rules in the presence of asset price bubbles, and to characterize the optimal monetary policy. In particular, I plan to use such a framework to assess the merits of a “leaning against the wind” strategy, which calls for a systematic rise in interest rates in response to the development of a bubble.
The second part of the project will seek to produce evidence, both empirical and experimental, regarding the effects of monetary policy on asset price bubbles. The empirical evidence will seek to identify and estimate the sign and size of the response of asset price bubbles to interest rate changes, exploiting the potential differences in the joint behavior of interest rates and asset prices during “bubbly” episodes, in comparison to “normal” times. In addition, I plan to conduct some lab experiments in order to shed light on the link between monetary policy and bubbles. Participants will trade two assets, a one-period riskless asset and a long-lived stock, in an environment consistent with the existence of asset price bubbles in equilibrium. Monetary policy interventions will take the form of changes in the short-term interest rate, engineered by the experimenter. The experiments will allow us to evaluate some of the predictions of the theoretical models regarding the impact of monetary policy on the dynamics of bubbles, as well as the effectiveness of “leaning against the wind” policies."
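The core tension in question (b) can be made concrete with a toy simulation. In rational-bubble models the bubble's required rate of return equals the interest rate, so a rule that raises rates as the bubble grows can accelerate it rather than deflate it. The sketch below is purely illustrative; the policy rule and parameter values are assumptions, not part of the proposal:

```python
import numpy as np

def bubble_path(leaning=0.0, T=100, r_bar=0.02, b0=1.0):
    """Deterministic rational-bubble sketch: the bubble must grow at the
    interest rate, b[t+1] = (1 + r_t) * b[t].  A positive `leaning`
    coefficient raises the rate when the bubble exceeds its initial size
    (a stylized 'leaning against the wind' rule; all values illustrative)."""
    b = np.empty(T)
    b[0] = b0
    for t in range(T - 1):
        r_t = r_bar + leaning * (b[t] - b0)  # interest-rate rule
        b[t + 1] = (1 + r_t) * b[t]          # bubble's required growth
    return b

passive = bubble_path(leaning=0.0)   # constant policy rate
lean = bubble_path(leaning=0.002)    # leaning against the wind
# In this class of models, the leaning rule raises the bubble's required
# return, so the bubble ends up larger, not smaller.
```

The paradoxical sign of this response is exactly the kind of theoretical prediction the project's empirical and experimental work would test.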
Max ERC Funding
799 200 €
Duration
Start date: 2014-01-01, End date: 2017-12-31
Project acronym CADENCE
Project Catalytic Dual-Function Devices Against Cancer
Researcher (PI) Jesus Santamaria
Host Institution (HI) UNIVERSIDAD DE ZARAGOZA
Country Spain
Call Details Advanced Grant (AdG), PE8, ERC-2016-ADG
Summary Despite intense research efforts in almost every branch of the natural sciences, cancer continues to be one of the leading causes of death worldwide. It is thus remarkable that little or no therapeutic use has been made of a whole discipline, heterogeneous catalysis, which is noted for its specificity and for enabling chemical reactions in otherwise passive environments. At least in part, this can be attributed to practical difficulties: the selective delivery of a catalyst to a tumour and the remote activation of its catalytic function only after it has reached its target are highly challenging objectives. Only recently have the necessary tools to overcome these problems come within reach.
CADENCE aims for a breakthrough in cancer therapy by developing a new therapeutic concept. The central hypothesis is that a growing tumour can be treated as a special type of reactor in which reaction conditions can be tailored to achieve two objectives: i) molecules essential to tumour growth are locally depleted and ii) toxic, short-lived products are generated in situ.
To implement this novel approach we will make use of core concepts of reactor engineering (kinetics, heat and mass transfer, catalyst design), as well as of ideas borrowed from other areas, mainly those of bio-orthogonal chemistry and controlled drug delivery. We will explore two different strategies (classical EPR effect and stem cells as Trojan Horses) to deliver optimized catalysts to the tumour. Once the catalysts have reached the tumour they will be remotely activated using near-infrared (NIR) light, which affords the highest penetration into body tissues.
This is an ambitious project, addressing all the key steps from catalyst design to in vivo studies. Given the novel perspective provided by CADENCE, even partial success in any of the approaches to be tested would have a significant impact on the therapeutic toolbox available to treat cancer.
Max ERC Funding
2 483 136 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym CDAC
Project "The role of consciousness in adaptive behavior: A combined empirical, computational and robot based approach"
Researcher (PI) Paulus Franciscus Maria Joseph Verschure
Host Institution (HI) UNIVERSIDAD POMPEU FABRA
Country Spain
Call Details Advanced Grant (AdG), SH4, ERC-2013-ADG
Summary "Understanding the nature of consciousness is one of the grand outstanding scientific challenges and two of its features stand out: consciousness is defined as the construction of one coherent scene but this scene is experienced with a delay relative to the action of the agent and not necessarily the cause of actions and thoughts. Did evolution render solutions to the challenge of survival that includes epiphenomenal processes? The Conscious Distributed Adaptive Control (CDAC) project aims at resolving this paradox by using a multi-disciplinary approach to show the functional role of consciousness in adaptive behaviour, to identify its underlying neuronal principles and to construct a neuromorphic robot based real-time conscious architecture. CDAC proposes that the shift from surviving in a physical world to one that is dominated by intentional agents requires radically different control architectures combining parallel and distributed control loops to assure real-time operation together with a second level of control that assures coherence through sequential coherent representation of self and the task domain, i.e. consciousness. This conscious scene is driving dedicated credit assignment and planning beyond the immediately given information. CDAC advances a comprehensive framework progressing beyond the state of the art and will be realized using system level models of a conscious architecture, detailed computational studies of its underlying neuronal substrate focusing, empirical validation with a humanoid robot and stroke patients and the advancement of beyond state of the art tools appropriate to the complexity of its objectives. The CDAC project directly addresses one of the main outstanding questions in science: the function and genesis of consciousness and will advance our understanding of mind and brain, provide radically new neurorehabilitation technologies and contribute to realizing a new generation of robots with advanced social competence."
Max ERC Funding
2 469 268 €
Duration
Start date: 2014-02-01, End date: 2019-01-31
Project acronym CELLDOCTOR
Project Quantitative understanding of a living system and its engineering as a cellular organelle
Researcher (PI) Luis Serrano
Host Institution (HI) FUNDACIO CENTRE DE REGULACIO GENOMICA
Country Spain
Call Details Advanced Grant (AdG), LS2, ERC-2008-AdG
Summary The idea of harnessing living organisms for treating human diseases is not new but, so far, the majority of the living vectors used in human therapy are viruses, which have the disadvantage of the limited number of genes and networks that they can contain. Bacteria allow the cloning of complex networks and the possibility of making a large plethora of compounds, naturally or through careful redesign. The main limitations on the use of bacteria to treat human diseases are their complexity, the existence of a cell wall that hinders communication with the target cells, the lack of control over their growth, and the immune response that they elicit in their target. Ideally one would like to have a very small bacterium (of mitochondrial size), with no cell wall, which could be grown in vitro and genetically manipulated, for which we have enough data to allow a complete understanding of its behaviour, and which could live as a human cell parasite. Such a microorganism could in principle be used as a living vector into which genes of interest, or networks producing organic molecules of medical relevance, could be introduced under in vitro conditions and then inoculated into extracted human cells or into the organism, to become a new organelle in the host. There, it could produce and secrete into the host proteins needed to correct a genetic disease, or drugs needed by the patient. To do that, we need to understand in excruciating detail the biology of the target bacterium and how to interface with the host cell cycle (the systems biology aspect). We then need engineering tools (network design, protein design, simulations) to modify the target bacterium to behave like an organelle once inside the cell (the synthetic biology aspect). M. pneumoniae could be such a bacterium.
It is one of the smallest free-living bacteria known (680 genes), has no cell wall, can be cultivated in vitro, can be genetically manipulated and can enter human cells.
Max ERC Funding
2 400 000 €
Duration
Start date: 2009-03-01, End date: 2015-02-28
Project acronym CERQUTE
Project Certification of quantum technologies
Researcher (PI) Antonio Acín
Host Institution (HI) FUNDACIO INSTITUT DE CIENCIES FOTONIQUES
Country Spain
Call Details Advanced Grant (AdG), PE2, ERC-2018-ADG
Summary Given a quantum system, how can one ensure that it (i) is entangled? (ii) is random? (iii) is secure? (iv) performs a computation correctly? The concept of quantum certification embraces all these questions, and CERQUTE’s main goal is to provide the tools to achieve such certification. The need for a new paradigm of quantum certification has emerged as a consequence of the impressive advances in the control of quantum systems. On the one hand, complex many-body quantum systems are prepared in many labs worldwide. On the other hand, quantum information technologies are making the transition to real applications. Quantum certification is a highly transversal concept that covers a broad range of scenarios (from many-body systems to protocols employing few devices) and questions (from theoretical results and experimental demonstrations to commercial products). CERQUTE is organized along three research lines that reflect this breadth and inter-disciplinary character: (A) many-body quantum systems: the objective is to provide the tools to identify quantum properties of many-body quantum systems; (B) quantum networks: the objective is to characterize networks in the quantum regime; (C) quantum cryptographic protocols: the objective is to construct cryptography protocols offering certified security. Crucial to achieving these objectives is the development of radically new methods to deal with quantum systems in an efficient way. Expected outcomes are: (i) new methods to detect quantum phenomena in the many-body regime, (ii) new protocols to benchmark quantum simulators and annealers, (iii) first methods to characterize quantum causality, (iv) new protocols exploiting simple network geometries, (v) experimentally friendly cryptographic protocols offering certified security.
CERQUTE goes to the heart of the fundamental question of what distinguishes quantum from classical physics and will provide the concepts and protocols for the certification of quantum phenomena and technologies.
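Question (i) has a classic device-independent answer: correlations that violate the CHSH Bell inequality (classical bound 2) certify entanglement without trusting the devices. A minimal numerical sketch with the textbook optimal state and measurements (these specific choices are standard illustrations, not taken from the proposal):

```python
import numpy as np

# Pauli observables
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Maximally entangled state |phi+> = (|00> + |11>) / sqrt(2)
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)

# Optimal CHSH measurement settings for this state
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

def corr(A, B):
    """Expectation value <A (x) B> in the state |phi+>."""
    return float(np.real(phi.conj() @ np.kron(A, B) @ phi))

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
# S attains the Tsirelson bound 2*sqrt(2) ~ 2.828, above the classical
# bound of 2: the observed statistics alone certify entanglement.
print(S)
```

Any value of S above 2 rules out all classical (local hidden-variable) models, which is the sense in which the certificate is device-independent.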
Max ERC Funding
1 735 044 €
Duration
Start date: 2020-01-01, End date: 2024-12-31
Project acronym CLOTHILDE
Project CLOTH manIpulation Learning from DEmonstrations
Researcher (PI) Carmen TORRAS GENIS
Host Institution (HI) AGENCIA ESTATAL CONSEJO SUPERIOR DEINVESTIGACIONES CIENTIFICAS
Country Spain
Call Details Advanced Grant (AdG), PE7, ERC-2016-ADG
Summary Textile objects pervade human environments and their versatile manipulation by robots would open up a whole range of possibilities, from increasing the autonomy of elderly and disabled people, housekeeping and hospital logistics, to novel automation in the clothing internet business and upholstered product manufacturing. Although efficient procedures exist for the robotic handling of rigid objects and the virtual rendering of deformable objects, cloth manipulation in the real world has proven elusive, because the vast number of degrees of freedom involved in non-rigid deformations leads to unbearable uncertainties in perception and action outcomes.
This proposal aims at developing a theory of cloth manipulation and carrying it all the way down to prototype implementation in our lab. By combining powerful recent tools from computational topology and machine learning, we plan to characterize the state of textile objects and their transformations under given actions in a compact operational way (i.e., encoding task-relevant topological changes), which would permit probabilistic planning of actions (first one-handed, then bimanual) that ensure reaching a desired cloth configuration despite noisy perceptions and inaccurate actions.
In our approach, the robot will learn manipulation skills from an initial human demonstration, subsequently refined through reinforcement learning, plus occasional requests for user advice. The skills will be encoded as parameterised dynamical systems, and safe interaction with humans will be guaranteed by using a predictive controller based on a model of the robot dynamics. Prototypes will be developed for three envisaged applications: recognizing and folding clothes, putting an elastic cover on a mattress or a car seat, and helping elderly and disabled people to dress. The broad Robotics and AI background of the PI and the project's narrow focus on clothing seem most appropriate to obtain a breakthrough in this hard fundamental research topic.
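In the learning-from-demonstration literature, "parameterised dynamical systems" for encoding skills are often dynamic movement primitives (DMPs); whether CLOTHILDE adopts exactly this formulation is an assumption here. A minimal 1-D sketch that fits a forcing term to one demonstration and reproduces the motion by integrating the learned system:

```python
import numpy as np

def fit_and_rollout(demo, dt=0.01, n_basis=20, alpha=25.0, ax=2.0):
    """Minimal 1-D dynamic movement primitive (DMP): learn a forcing term
    from a demonstrated trajectory, then reproduce the motion with a
    goal-directed second-order system plus the learned forcing."""
    beta = alpha / 4.0                       # critically damped spring
    T = len(demo)
    y0, g = demo[0], demo[-1]
    yd = np.gradient(demo, dt)
    ydd = np.gradient(yd, dt)
    t = np.arange(T) * dt
    x = np.exp(-ax * t)                      # canonical phase, 1 -> 0
    # forcing term implied by the demonstration
    f_target = ydd - alpha * (beta * (g - demo) - yd)
    # radial basis functions in phase space (standard DMP heuristics)
    centers = np.exp(-ax * np.linspace(0, t[-1], n_basis))
    widths = n_basis ** 1.5 / centers
    psi = np.exp(-widths * (x[:, None] - centers) ** 2)
    features = psi / psi.sum(axis=1, keepdims=True) * x[:, None]
    w, *_ = np.linalg.lstsq(features, f_target, rcond=None)
    # roll out the learned dynamical system (Euler integration)
    y, ydot, traj = y0, 0.0, []
    for i in range(T):
        ydd_i = alpha * (beta * (g - y) - ydot) + features[i] @ w
        ydot += ydd_i * dt
        y += ydot * dt
        traj.append(y)
    return np.array(traj)

demo = np.sin(np.linspace(0.0, np.pi / 2, 200))  # demonstration: 0 -> 1
repro = fit_and_rollout(demo)
# The rollout tracks the demonstration and converges to its goal.
```

Because the forcing term decays with the phase variable, the spring-damper attractor guarantees convergence to the goal even when the fit is imperfect, which is what makes such encodings robust starting points for reinforcement-learning refinement.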
Max ERC Funding
2 499 149 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym CoCoUnit
Project CoCoUnit: An Energy-Efficient Processing Unit for Cognitive Computing
Researcher (PI) Antonio Maria Gonzalez Colas
Host Institution (HI) UNIVERSITAT POLITECNICA DE CATALUNYA
Country Spain
Call Details Advanced Grant (AdG), PE6, ERC-2018-ADG
Summary There is a fast-growing interest in extending the capabilities of computing systems to perform human-like tasks in an intelligent way. These technologies are usually referred to as cognitive computing. We envision a next revolution in computing in the forthcoming years that will be driven by deploying many “intelligent” devices around us in all kinds of environments (work, entertainment, transportation, health care, etc.), backed up by “intelligent” servers in the cloud. These cognitive computing systems will provide new user experiences by delivering new services or improving the operational efficiency of existing ones, and altogether will enrich our lives and our economy.
A key characteristic of cognitive computing systems will be their capability to process in real time large amounts of data coming from audio and vision devices, and other types of sensors. This will demand very high computing power but at the same time extremely low energy consumption. This very challenging energy-efficiency requirement is a sine qua non for success, not only for mobile and wearable systems, where power dissipation and cost budgets are very low, but also for large data centers, where energy consumption is a main component of the total cost of ownership.
Current processor architectures (including general-purpose cores and GPUs) are not a good fit for this type of system, since they keep the same basic organization as early computers, which were mainly optimized for “number crunching”. CoCoUnit will take a disruptive direction by investigating unconventional architectures that can offer orders-of-magnitude better efficiency in terms of performance per energy and cost for cognitive computing tasks. The ultimate goal of this project is to devise a novel processing unit that will be integrated with the existing units of a processor (general-purpose cores and GPUs) and altogether will be able to deliver cognitive computing user experiences with extremely high energy efficiency.
Max ERC Funding
2 498 661 €
Duration
Start date: 2019-09-01, End date: 2025-02-28
Project acronym COMP-DES-MAT
Project Advanced tools for computational design of engineering materials
Researcher (PI) Francisco Javier (Xavier) Oliver Olivella
Host Institution (HI) CENTRE INTERNACIONAL DE METODES NUMERICS EN ENGINYERIA
Country Spain
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary The overall goal of the project is to contribute to the consolidation of the nascent and revolutionary philosophy of “Materials by Design” by resorting to the enormous power provided by nowadays-available computational techniques. The limitations of current procedures for developing material-based innovative technologies in engineering are often made manifest: many times only a catalogue, or database, of materials is available and the new technologies have to adapt to them, in the same way that buyers of ready-to-wear clothing have to take from the shop the costume that fits them best, but not one that fits them properly. This constitutes an enormous limitation for the intended goals and scope. Certainly, the availability of materials specifically designed by goal-oriented methods could eradicate that limitation, but this purpose faces the bounds of experimental procedures of material design, commonly based on trial-and-error procedures.
Computational mechanics, with the emerging Computational Materials Design (CMD) research field, has much to offer in this respect. The increasing power of new computer processors and, most importantly, the development of new methods and strategies of computational simulation open new ways to face the problem. The project intends to break through the barriers that presently hinder the development and application of computational materials design by means of the synergistic exploration and development of three complementary families of methods: 1) computational multiscale material modeling (CMM) based on the bottom-up, one-way-coupled description of the material structure at different representative scales, 2) development of a new generation of high-performance reduced-order modeling techniques (HP-ROM), in order to bring the associated computational costs down to affordable levels, and 3) new computational strategies and methods for the optimal design of the material meso/micro structure arrangement and topology (MATO).
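Reduced-order modeling of the kind targeted in point 2) commonly starts from proper orthogonal decomposition (POD): assemble snapshots of full-order solutions, extract a small basis by SVD, and project the model onto it. A minimal sketch on synthetic low-rank data (POD is assumed here as a representative HP-ROM ingredient; the proposal's actual techniques may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: each column is one full-order solution (e.g. a nodal
# displacement field) for one load case / parameter value.  The synthetic
# data has low-rank structure plus small noise.
n_dof, n_snap, rank = 500, 40, 3
S = rng.normal(size=(n_dof, rank)) @ rng.normal(size=(rank, n_snap))
S += 1e-3 * rng.normal(size=(n_dof, n_snap))

# POD: singular value decomposition of the snapshots
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :rank]                 # reduced basis (3 modes out of 500 DOFs)
S_rom = Phi @ (Phi.T @ S)         # projection onto the reduced space

err = np.linalg.norm(S - S_rom) / np.linalg.norm(S)
# A 3-mode basis reproduces the 500-DOF snapshots almost exactly, which is
# what makes reduced-order models orders of magnitude cheaper to evaluate.
print(err)
```

The same projection applied to the governing equations, rather than to stored snapshots, is what turns this compression into a cheap surrogate model usable inside a design loop.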
Max ERC Funding
2 372 973 €
Duration
Start date: 2013-02-01, End date: 2018-01-31
Project acronym COMPMUSIC
Project Computational models for the discovery of the world's music
Researcher (PI) Francesc Xavier Serra Casals
Host Institution (HI) UNIVERSIDAD POMPEU FABRA
Country Spain
Call Details Advanced Grant (AdG), PE6, ERC-2010-AdG_20100224
Summary Current IT research does not respond to the world's multicultural reality. It could be argued that we are imposing the paradigms of our market-driven western culture on IT as well, and that current IT research results will only facilitate access to a small part of the world's information for a small part of the world's population. Most IT research is carried out with a western-centred approach and, as a result, our data models, cognition models, user models, interaction models, ontologies, etc. are all culturally biased. This is quite evident in music information research: despite the world's richness in musical cultures, most research is centred on the CDs and metadata of western commercial music. CompMusic wants to break this huge research bias. By approaching music information modelling from a multicultural perspective, it aims to advance the state of the art while facilitating the discovery and reuse of music produced outside the western commercial context. But the development of computational models that address the richness of the world's music information cannot be done from the West looking out; we have to involve researchers and musical experts immersed in the different cultures. Their contribution is fundamental to developing the appropriate multicultural musicological and cognitive frameworks, from which we should then carry out our research on finding appropriate musical features, ontologies, data representations, user interfaces and user-centred approaches. CompMusic will investigate some of the most consolidated non-western classical music traditions, Indian (Hindustani, Carnatic), Turkish-Arab (Ottoman, Andalusian) and Chinese (Han), developing the computational models needed to bring their music into the current globalized information framework.
Using these music cultures as case studies, cultures that are alive and have a strong influence on current society, we can develop rich information models that take advantage of the existing information coming from musicological and cultural studies, from mature performance-practice traditions and from active social contexts. With this approach we aim to challenge the current western-centred information paradigms, advance our IT research and contribute to our rich multicultural society.
Max ERC Funding
2 443 200 €
Duration
Start date: 2011-07-01, End date: 2017-06-30
Project acronym COTURB
Project Coherent Structures in Wall-bounded Turbulence
Researcher (PI) Javier Jimenez Sendin
Host Institution (HI) UNIVERSIDAD POLITECNICA DE MADRID
Country Spain
Call Details Advanced Grant (AdG), PE8, ERC-2014-ADG
Summary Turbulence is a multiscale phenomenon for which control efforts have often failed because the dimension of the attractor is large. However, kinetic energy and drag are controlled by relatively few slowly evolving large structures that sit on top of a multiscale cascade of smaller eddies. They are essentially single-scale phenomena whose evolution can be described using less information than for the full flow. In evolutionary terms they are punctuated ‘equilibria’ for which chaotic evolution is only intermittent. The rest of the time they can be considered coherent and predictable for relatively long periods. Coherent structures studied in the 1970s in free-shear flows (e.g. jets) eventually led to increased understanding and to industrial applications. In wall-bounded cases (e.g. boundary layers), proposed structures range from exact permanent waves and orbits to qualitative observations such as hairpins or ejections. Although most of them have been described at low Reynolds numbers, there are reasons to believe that they persist at higher ones in the ‘LES’ sense in which small scales are treated statistically. Recent computational and experimental advances provide enough temporally and spatially resolved data to quantify the relevance of such models to fully developed flows. We propose to use mostly existing numerical databases to test the various models of wall-bounded coherent structures, to quantify how often and how closely the flow approaches them, and to develop moderate-time predictions. Existing solutions will be extended to the LES equations, methods will be sought to identify them in fully turbulent flows, and reduced-order models will be developed and tested. In practical situations, the idea is to be able to detect large eddies and to predict them ‘most of the time’. If simple enough models are found, the process will be implemented in the laboratory and used to suggest control strategies.
Max ERC Funding
2 497 000 €
Duration
Start date: 2016-02-01, End date: 2021-07-31
Project acronym DYCON
Project Dynamic Control and Numerics of Partial Differential Equations
Researcher (PI) Enrique Zuazua
Host Institution (HI) FUNDACION DEUSTO
Country Spain
Call Details Advanced Grant (AdG), PE1, ERC-2015-AdG
Summary This project aims at making a breakthrough contribution in the broad area of Control of Partial Differential Equations (PDE) and their numerical approximation methods by addressing key unsolved issues appearing systematically in real-life applications.
To this end, we pursue three objectives: 1) to contribute with new key theoretical methods and results, 2) to develop the corresponding numerical tools, and 3) to build up new computational software, the DYCON-COMP computational platform, thereby bridging the gap to applications.
The field of PDEs, together with numerical approximation and simulation methods and control theory, has evolved significantly in recent decades in a cross-fertilization process, to address the challenging demands of industrial and cross-disciplinary applications such as the management of natural resources, meteorology, aeronautics, the oil industry, biomedicine, and human and animal collective behaviour. Despite these efforts, some key issues remain unsolved, because of a lack of analytical understanding, the absence of efficient numerical solvers, or a combination of both.
This project identifies and focuses on six key topics that play a central role in most of the processes arising in applications but are still poorly understood: control of parameter-dependent problems; long-time-horizon control; control under constraints; inverse design of time-irreversible models; memory models and hybrid PDE/ODE models; and finite- versus infinite-dimensional dynamical systems.
These topics cannot be handled by superposing the state of the art in the various disciplines, due to the unexpected interactive phenomena that may emerge, for instance, in the fine numerical approximation of control problems. The coordinated and focused effort that we aim at developing is timely and much needed in order to solve these issues and bridge the gap from modelling to control, computer simulations and applications.
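As a minimal illustration of the kind of problem the project addresses, the sketch below discretizes a 1-D heat equation in space and time and computes, by least squares, a source control that steers the state from zero toward a target profile at the final time. All parameters are hypothetical, and the toy deliberately ignores the genuinely hard issues listed above (constraints, long horizons, parameter dependence).

```python
import numpy as np

# Toy finite-dimensional control problem in the spirit of PDE control:
# an explicit finite-difference heat equation y_{k+1} = A y_k + B u_k,
# steered toward a target profile at the final step by least squares.
n, steps, dt = 20, 400, 0.001
h = 1.0 / (n + 1)
r = dt / h**2            # about 0.44 here, below the 0.5 stability limit

# Tridiagonal propagator A of the explicit heat-equation scheme.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 1.0 - 2.0 * r
    if i > 0:
        A[i, i - 1] = r
    if i < n - 1:
        A[i, i + 1] = r

# Scalar control acting as a source on the first interior node.
B = np.zeros(n)
B[0] = dt

# Final state from zero initial data: y_N = sum_k A^(N-1-k) B u_k = G u.
G = np.zeros((n, steps))
Ak = np.eye(n)
for k in range(steps - 1, -1, -1):
    G[:, k] = Ak @ B
    Ak = A @ Ak

x = np.linspace(h, 1.0 - h, n)
target = np.sin(np.pi * x)   # hypothetical target temperature profile

# Minimum-norm least-squares control reaching (approximately) the target.
u, *_ = np.linalg.lstsq(G, target, rcond=None)
miss = np.linalg.norm(G @ u - target) / np.linalg.norm(target)
print("relative distance to target:", miss)
```

Even this toy hints at the numerical subtleties the project targets: the columns of the reachability map G decay with the powers of A, so naive discretize-then-optimize pipelines can become badly conditioned well before the continuous control problem does.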
Max ERC Funding
2 065 125 €
Duration
Start date: 2016-10-01, End date: 2022-09-30
Project acronym DYNURBAN
Project Urban dynamics: learning from integrated models and big data
Researcher (PI) Diego Puga
Host Institution (HI) FUNDACION CENTRO DE ESTUDIOS MONETARIOS Y FINANCIEROS
Country Spain
Call Details Advanced Grant (AdG), SH1, ERC-2015-AdG
Summary City growth is driven by a combination of systematic determinants and shocks. Random growth models predict realistic city size distributions but ignore, for instance, the strong empirical association between human capital and city growth. Models with systematic determinants predict degenerate size distributions. We will develop an integrated model that combines systematic and random determinants to explain the link between human capital, entrepreneurship and growth, while generating relevant city size distributions. We will calibrate the model to quantify the contribution of cities to aggregate growth.
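The tension described above can be made concrete with a minimal random-growth simulation: purely proportional (Gibrat) shocks with a lower reflecting barrier generate a realistic, Zipf-like rank-size distribution while saying nothing about human capital or entrepreneurship. The parameters below are hypothetical.

```python
import numpy as np

# Illustrative random-growth (Gibrat) model of city sizes: proportional
# shocks plus a lower reflecting barrier yield a heavy-tailed, roughly
# Zipf rank-size distribution. All parameters are hypothetical.
rng = np.random.default_rng(42)
n_cities, periods, s_min = 5000, 2000, 1.0

size = np.ones(n_cities)
for _ in range(periods):
    # Log growth shock: mean -0.005, std 0.1 (drift = -sigma^2 / 2).
    shocks = rng.lognormal(mean=-0.005, sigma=0.1, size=n_cities)
    size = np.maximum(size * shocks, s_min)   # reflecting lower barrier

# Rank-size (Zipf) regression on the upper tail: log rank vs log size.
top = np.sort(size)[::-1][:500]
ranks = np.arange(1, top.size + 1)
slope, _ = np.polyfit(np.log(top), np.log(ranks), 1)
print("rank-size slope (Zipf predicts about -1):", slope)
```

With the drift set to minus half the shock variance, the stationary size distribution is approximately Pareto with exponent one, so the log-rank on log-size slope hovers around -1: Zipf's law from purely random growth.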
Urban growth also has a poorly understood spatial component. Combining gridded data on land use, population, businesses and roads for three decennial periods, we will track the evolution of land use in the US with an unprecedented level of spatial detail. We will pay particular attention to the magnitude and causes of “slash-and-burn” development: instances in which built-up land stops meeting needs in terms of use and intensity and, instead of being redeveloped, is abandoned while previously open space is built up.
Job-to-job flows across cities matter for efficiency, and during the recent crisis they plummeted. We will study them with individual social security data. Even if there have been only small changes in mismatch between unemployed workers and vacancies during the crisis, misallocation can increase substantially if workers shy away from moving to take a job in another city.
We will also study commuting flows for Spain and the UK based on anonymized cell phone location records. We will identify urban areas by iteratively aggregating municipalities if more than a given share of transit flows end in the rest of the urban area. We will also measure the extent to which people cross paths with others opening the possibility of personal interactions, and assess the extent to which this generates productivity-enhancing agglomeration economies.
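The aggregation rule described in this paragraph can be sketched directly. In the toy below, a municipality joins an urban area whenever more than a given share of its outgoing commuting flows end inside that area, and the procedure iterates until no further merges occur; the flow matrix and threshold are invented for illustration.

```python
# Illustrative sketch of the iterative commuting-flow aggregation rule:
# a municipality joins an urban area when more than a given share of its
# outgoing flows end inside that area. Toy data, hypothetical threshold.

def aggregate(flows, threshold=0.5):
    """flows[i][j] = commuters from municipality i to j; returns area labels."""
    n = len(flows)
    area = list(range(n))              # start: each municipality on its own

    def members(a):
        return [i for i in range(n) if area[i] == a]

    changed = True
    while changed:
        changed = False
        for i in range(n):
            total = sum(flows[i][j] for j in range(n) if j != i)
            if total == 0:
                continue
            for a in set(area):        # candidate urban areas other than i's
                if a == area[i]:
                    continue
                share = sum(flows[i][j] for j in members(a)) / total
                if share > threshold:
                    area[i] = a        # i joins urban area a
                    changed = True
                    break
    return area

# Toy example: 0 and 1 commute heavily to each other, 2 commutes to both;
# 3 and 4 form a separate commuting pair.
F = [[0, 80, 10, 0, 0],
     [70, 0, 10, 0, 0],
     [40, 30, 0, 0, 5],
     [0, 0, 0, 0, 60],
     [0, 0, 5, 55, 0]]
print(aggregate(F))   # municipalities 0-2 merge; 3 and 4 merge separately
```

A production implementation would also need an ordering rule for candidate areas, safeguards against oscillation and treatment of within-municipality flows, all of which this sketch omits.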
Max ERC Funding
1 292 586 €
Duration
Start date: 2016-08-01, End date: 2021-07-31