Project acronym ABEP
Project Asset Bubbles and Economic Policy
Researcher (PI) Jaume Ventura Fontanet
Host Institution (HI) Centre de Recerca en Economia Internacional (CREI)
Call Details Advanced Grant (AdG), SH1, ERC-2009-AdG
Summary Advanced capitalist economies experience large and persistent movements in asset prices that are difficult to justify with economic fundamentals. The internet bubble of the 1990s and the real estate bubble of the 2000s are two recent examples. The predominant view is that these bubbles are a market failure, caused by some form of individual irrationality on the part of market participants. This project is based instead on the view that market participants are individually rational, although this does not preclude sometimes collectively sub-optimal outcomes. Bubbles are thus not a source of market failure by themselves but instead arise as a result of a pre-existing market failure, namely, the existence of pockets of dynamically inefficient investments. Under some conditions, bubbles partly solve this problem, increasing market efficiency and welfare. It is, however, also possible that bubbles do not solve the underlying problem and, in addition, create negative side-effects. The main objective of this project is to develop this view of asset bubbles, and to produce an empirically relevant macroeconomic framework that allows us to address the following questions: (i) What is the relationship between bubbles and financial market frictions? Special emphasis is given to how the globalization of financial markets and the development of new financial products affect the size and effects of bubbles. (ii) What is the relationship between bubbles, economic growth and unemployment? The theory suggests the presence of virtuous and vicious cycles, as economic growth creates the conditions for bubbles to pop up, while bubbles create incentives for economic growth to happen. (iii) What is the optimal policy to manage bubbles? We need to develop the tools that allow policy makers to sustain those bubbles that have positive effects and burst those that have negative effects.
Max ERC Funding
1 000 000 €
Duration
Start date: 2010-04-01, End date: 2015-03-31
Project acronym ACUITY
Project Algorithms for coping with uncertainty and intractability
Researcher (PI) Nikhil Bansal
Host Institution (HI) TECHNISCHE UNIVERSITEIT EINDHOVEN
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary The two biggest challenges in solving practical optimization problems are computational intractability and the presence of uncertainty: most problems are either NP-hard, or have incomplete input data, which makes exact computation impossible.
Recently, there has been huge progress in our understanding of intractability, based on spectacular algorithmic and lower-bound techniques. For several problems, especially those with only local constraints, we can design approximation algorithms that are provably the best possible.
However, typical optimization problems usually involve complex global constraints and are much less understood. The situation is even worse for coping with uncertainty: most of the algorithms are based on ad-hoc techniques, and there is no deeper understanding of what makes various problems easy or hard.
This proposal describes several new directions, together with concrete intermediate goals, that will break important new ground in the theory of approximation and online algorithms. The particular directions we consider are (i) extending the primal-dual method to systematically design online algorithms, (ii) building a structural theory of online problems based on work functions, (iii) developing new tools to use the power of strong convex relaxations, and (iv) designing new algorithmic approaches based on non-constructive proof techniques.
The proposed research is at the cutting edge of algorithm design and builds upon the recent success of the PI in resolving several longstanding questions in these areas. Any progress is likely to be a significant contribution to theoretical computer science and combinatorial optimization.
Max ERC Funding
1 519 285 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym AdaptiveResponse
Project The evolution of adaptive response mechanisms
Researcher (PI) Franz WEISSING
Host Institution (HI) RIJKSUNIVERSITEIT GRONINGEN
Call Details Advanced Grant (AdG), LS8, ERC-2017-ADG
Summary In an era of rapid climate change there is a pressing need to understand whether and how organisms are able to adapt to novel environments. Such understanding is hampered by a major divide in the life sciences. Disciplines like systems biology or neurobiology make rapid progress in unravelling the mechanisms underlying the responses of organisms to their environment, but this knowledge is insufficiently integrated in eco-evolutionary theory. Current eco-evolutionary models focus on the response patterns themselves, largely neglecting the structures and mechanisms producing these patterns. Here I propose a new, mechanism-oriented framework that views the architecture of adaptation, rather than the resulting responses, as the primary target of natural selection. I am convinced that this change in perspective will yield fundamentally new insights, necessitating the re-evaluation of many seemingly well-established eco-evolutionary principles.
My aim is to develop a comprehensive theory of the eco-evolutionary causes and consequences of the architecture underlying adaptive responses. In three parallel lines of investigation, I will study how architecture is shaped by selection, how evolved response strategies reflect the underlying architecture, and how these responses affect the eco-evolutionary dynamics and the capacity to adapt to novel conditions. All three lines have the potential of making ground-breaking contributions to eco-evolutionary theory, including: the specification of evolutionary tipping points; resolving the puzzle that real organisms evolve much faster than predicted by current theory; a new and general explanation for the evolutionary emergence of individual variation; and a framework for studying the evolution of learning and other general-purpose mechanisms. By making use of concepts from information theory and artificial intelligence, the project will also introduce various methodological innovations.
Max ERC Funding
2 500 000 €
Duration
Start date: 2018-12-01, End date: 2023-11-30
Project acronym ALGSTRONGCRYPTO
Project Algebraic Methods for Stronger Crypto
Researcher (PI) Ronald John Fitzgerald CRAMER
Host Institution (HI) STICHTING NEDERLANDSE WETENSCHAPPELIJK ONDERZOEK INSTITUTEN
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Our field is cryptology. Our overarching objective is to advance significantly the frontiers in the design and analysis of high-security cryptography for the future generation. In particular, we wish to enhance the efficiency, functionality and, last but not least, the fundamental understanding of cryptographic security against very powerful adversaries. Our approach is to develop completely novel methods by deepening, strengthening and broadening the algebraic foundations of the field.
Concretely, our lens builds on the arithmetic codex. This is a general, abstract cryptographic primitive whose basic theory we recently developed and whose asymptotic part, which relies on algebraic geometry, enjoys crucial applications in surprising foundational results on constant communication-rate two-party cryptography. A codex is a linear (error-correcting) code that, when its ambient vector space is endowed with coordinate-wise multiplication, can be viewed as simulating, up to some degree, richer arithmetical structures such as finite fields (or products thereof) or, more generally, finite-dimensional algebras over finite fields. Besides this degree, the notion also captures the coordinate-localities for which the simulation holds and those for which it does not.
Our method is based on novel perspectives on codices which significantly widen their scope and strengthen their utility. In particular, we bring symmetries, computational and complexity-theoretic aspects, and connections with algebraic number theory, algebraic geometry and algebraic combinatorics into play in novel ways. Our applications range from public-key cryptography to secure multi-party computation.
Our proposal is subdivided into three interconnected modules:
(1) Algebraic and Number-Theoretic Cryptanalysis
(2) Construction of Algebraic Crypto Primitives
(3) Advanced Theory of Arithmetic Codices
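The coordinate-wise multiplication at the heart of the codex definition can be illustrated with Reed-Solomon codes, a classical example of such multiplicative structure. The sketch below is a hypothetical illustration of the general idea, not code from the proposal; the field size, evaluation points and polynomials are arbitrary choices. It shows that the coordinate-wise ("Schur") product of two Reed-Solomon codewords is again a Reed-Solomon codeword, of roughly twice the degree:

```python
# Hypothetical illustration: coordinate-wise multiplication of Reed-Solomon
# codewords simulates multiplication of the encoded polynomials, "up to some
# degree" -- the degree grows from < k to < 2k - 1.

P = 13                            # a small prime field GF(13)
POINTS = [1, 2, 3, 4, 5, 6, 7]    # distinct evaluation points in GF(13)

def encode(coeffs):
    """Codeword = evaluations of sum(c_i * x^i) at every point, mod P."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in POINTS]

def poly_mul(f, g):
    """Multiply two coefficient vectors over GF(P)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

f, g = [3, 1, 4], [2, 7, 1]       # two polynomials of degree < 3 (k = 3)
cf, cg = encode(f), encode(g)

# Coordinate-wise ("Schur") product of the two codewords:
schur = [(a * b) % P for a, b in zip(cf, cg)]

# It equals the codeword of the product polynomial, i.e. it lies in the
# Reed-Solomon code of degree < 2k - 1 = 5.
assert schur == encode(poly_mul(f, g))
```

The growth in degree is the "up to some degree" limitation mentioned in the summary; arithmetic codices abstract and generalize this multiplicative structure beyond Reed-Solomon codes.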
Max ERC Funding
2 447 439 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym ALMP_ECON
Project Effective evaluation of active labour market policies in social insurance programs - improving the interaction between econometric evaluation estimators and economic theory
Researcher (PI) Bas Van Der Klaauw
Host Institution (HI) STICHTING VU
Call Details Starting Grant (StG), SH1, ERC-2007-StG
Summary In most European countries, social insurance programs such as welfare, unemployment insurance and disability insurance are characterized by low reemployment rates. Therefore, governments spend huge amounts of money on active labour market programs, which should help individuals find work. Recent surveys indicate that programs which aim at intensifying job search behaviour are much more effective than schooling programs for improving human capital. A second conclusion from these surveys is that, despite the size of the spending on these programs, evidence on their effectiveness is limited. This research proposal aims at developing an economic framework that will be used to evaluate the effectiveness of popular programs like offering reemployment bonuses, fraud detection, workfare and job search monitoring. The main innovation is that I will combine economic theory with recently developed econometric techniques and detailed administrative data sets, which have not been explored before. While most of the literature focuses only on short-term outcomes, the available data allow me to also consider the long-term effectiveness of programs. The key advantage of an economic model is that I can compare the effectiveness of the different programs, and consider modifications and combinations of programs. Furthermore, using an economic model I can construct profiling measures to improve the targeting of programs to subsamples of the population. This is particularly relevant if the effectiveness of programs differs between individuals or depends on the moment in time the program is offered. Therefore, the results from this research will not only be of scientific interest, but will also be of great value to policymakers.
Max ERC Funding
550 000 €
Duration
Start date: 2008-07-01, End date: 2013-06-30
Project acronym ANAMMOX
Project Anaerobic ammonium oxidizing bacteria: unique prokaryotes with exceptional properties
Researcher (PI) Michael Silvester Maria Jetten
Host Institution (HI) STICHTING KATHOLIEKE UNIVERSITEIT
Call Details Advanced Grant (AdG), LS8, ERC-2008-AdG
Summary For over a century it was believed that ammonium could only be oxidized by microbes in the presence of oxygen, and anaerobic ammonium oxidation (anammox) was considered impossible. However, about 10 years ago the microbes responsible for the anammox reaction were discovered in a wastewater plant. This was followed by the identification of the responsible bacteria. Recently, the widespread environmental occurrence of anammox bacteria was demonstrated, leading to the realization that they may play a major role in biological nitrogen cycling. The anammox bacteria are unique microbes with many unusual properties. These include the biological turnover of hydrazine, a well-known rocket fuel; the biological synthesis of ladderane lipids; and the presence of a prokaryotic organelle in the cytoplasm of anammox bacteria. The aim of this project is to obtain a fundamental understanding of the metabolism and ecological importance of the anammox bacteria. Such understanding contributes directly to our environment and economy, because the anammox bacteria offer a new opportunity for nitrogen removal from wastewater that is cheaper and emits less carbon dioxide than existing technology. Scientifically, the results will contribute to the understanding of how hydrazine and dinitrogen gas are made by the anammox bacteria. The research will show which gene products are responsible for the anammox reaction, and how their expression is regulated. Furthermore, the experiments proposed will show whether the prokaryotic organelle in anammox bacteria is involved in energy generation. Together, the environmental and metabolic data will help to understand why anammox bacteria are so successful in the biogeochemical nitrogen cycle and thus shape our planet's atmosphere. The different research lines will employ state-of-the-art microbial and molecular methods to unravel the exceptional properties of these highly unusual and important anammox bacteria.
Max ERC Funding
2 500 000 €
Duration
Start date: 2009-01-01, End date: 2013-12-31
Project acronym ANIMETRICS
Project Measurement-Based Modeling and Animation of Complex Mechanical Phenomena
Researcher (PI) Miguel Angel Otaduy Tristan
Host Institution (HI) UNIVERSIDAD REY JUAN CARLOS
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary Computer animation has traditionally been associated with applications in virtual-reality-based training, video games or feature films. However, interactive animation is gaining relevance in a more general scope, as a tool for early-stage analysis, design and planning in many applications in science and engineering. The user can get quick and visual feedback of the results, and then proceed by refining the experiments or designs. Potential applications include nanodesign, e-commerce or tactile telecommunication, but they also reach as far as, e.g., the analysis of ecological, climate, biological or physiological processes.
The application of computer animation is extremely limited in comparison to its potential outreach due to a trade-off between accuracy and computational efficiency. This trade-off is induced by inherent sources of complexity such as nonlinear or anisotropic behaviors, heterogeneous properties, or high dynamic ranges of effects.
The Animetrics project proposes a modeling and animation methodology that consists of a multi-scale decomposition of complex processes, the description of the process at each scale through a combination of simple local models, and the fitting of the parameters of those local models using large amounts of data from example effects. The modeling and animation methodology will be explored on specific problems arising in complex mechanical phenomena, including viscoelasticity of solids and thin shells, multi-body contact, granular and liquid flow, and fracture of solids.
Max ERC Funding
1 277 969 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym APMPAL
Project Asset Prices and Macro Policy when Agents Learn
Researcher (PI) Albert Marcet Torrens
Host Institution (HI) FUNDACIÓ MARKETS, ORGANIZATIONS AND VOTES IN ECONOMICS
Call Details Advanced Grant (AdG), SH1, ERC-2012-ADG_20120411
Summary A conventional assumption in dynamic models is that agents form their expectations in a very sophisticated manner: in particular, that they have Rational Expectations (RE). We develop some tools to relax this assumption while retaining fully optimal behaviour by agents. We study implications for asset pricing and macro policy.
We assume that agents have a consistent set of beliefs that is close, but not equal, to RE. Agents are "Internally Rational": that is, they behave rationally given their system of beliefs. Thus, it is conceptually a small deviation from RE. It provides microfoundations for models of adaptive learning, since the learning algorithm is determined by agents' optimal behaviour. In previous work we have shown that this framework can match stock price and housing price fluctuations, and that the policy implications are quite different.
In this project we intend to: i) develop further the foundations of internally rational (IR) learning, ii) apply this to explain observed asset price behaviour, such as stock prices, bond prices, inflation, commodity derivatives, and exchange rates, iii) extend the IR framework to the case when agents entertain various models, and iv) study optimal policy under IR learning and under private information when some hidden shocks are not revealed ex post. Along the way we will address policy issues such as: the effects of creating derivative markets, sovereign spreads as a signal of sovereign default risk, tests of fiscal sustainability, fiscal policy when agents learn, monetary policy (more specifically, QE measures and interest rate policy), and the role of credibility in macro policy.
Max ERC Funding
1 970 260 €
Duration
Start date: 2013-06-01, End date: 2018-08-31
Project acronym APMPAL-HET
Project Asset Prices and Macro Policy when Agents Learn and are Heterogeneous
Researcher (PI) Albert MARCET TORRENS
Host Institution (HI) FUNDACIÓ MARKETS, ORGANIZATIONS AND VOTES IN ECONOMICS
Call Details Advanced Grant (AdG), SH1, ERC-2017-ADG
Summary Based on the APMPAL (ERC) project we continue to develop the frameworks of internal rationality (IR) and optimal signal extraction (OSE). Under IR investors/consumers behave rationally given their subjective beliefs about prices, these beliefs are compatible with data. Under OSE the government has partial information, it knows how policy influences observed variables and signal extraction.
We develop further the foundations of IR and OSE with an emphasis on heterogeneous agents. We study sovereign bond crises and heterogeneity of beliefs in asset-pricing models under IR, using survey data on expectations. Under IR the assets' stochastic discount factor depends on the agents' decision function and beliefs; this modifies some key asset-pricing results. We extend OSE to models with state variables, forward-looking constraints and heterogeneity.
Under IR, agents' prior beliefs determine the effects of a policy reform. If the government does not observe these prior beliefs, it has partial information, so OSE should be used to analyse policy reforms under IR.
If IR heterogeneous workers forecast their productivity either from their own wage or from their neighbours' wages in a network, low current wages discourage search and human capital accumulation, leading to low productivity. This can explain the low development of a country or the social exclusion of a group. Worker subsidies redistribute wealth and can increase productivity if they "teach" agents to exit a low-wage state.
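The low-wage-trap mechanism can be caricatured in a few lines. Everything below is a hypothetical illustration, not the project's model: the averaging rule, the clique network and the numbers are invented. Workers set next period's effort to the average wage they observe among neighbours, so an isolated group stays at low wages until a one-off subsidy shows them that higher wages are attainable.

```python
# An isolated clique of three workers who only observe each other.
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
wage = {0: 0.2, 1: 0.2, 2: 0.2}   # everyone starts in the low-wage state

def step(wage, subsidy=0.0):
    # Next period's wage = effort = average wage observed in the network,
    # plus any per-worker subsidy (productivity normalised to 1).
    return {i: sum(wage[j] for j in neighbours[i]) / len(neighbours[i]) + subsidy
            for i in wage}

for _ in range(20):
    wage = step(wage)             # no subsidy: the group never escapes
low = wage[0]

wage = step(wage, subsidy=0.5)    # a single subsidised period
for _ in range(20):
    wage = step(wage)             # beliefs, and hence wages, stay high
print(low, wage[0])  # -> 0.2 0.7
```

The point of the toy is that the subsidy is temporary but its effect is permanent: it changes what agents observe, and therefore what they learn, which is the "teaching" channel described above.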
We build DSGE models under IR for prediction and policy analysis. We develop time-series tools for predicting macro and asset market variables, using information available to the analyst, and we introduce non-linearities and survey expectations using insights from models under IR.
We study how IR and OSE change the view on macro policy issues such as tax smoothing, debt management, the Taylor rule, the level of inflation, fiscal/monetary policy coordination, factor taxation and redistribution.
Max ERC Funding
1 524 144 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym AUTAR
Project A Unified Theory of Algorithmic Relaxations
Researcher (PI) Albert Atserias Peri
Host Institution (HI) UNIVERSITAT POLITECNICA DE CATALUNYA
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary For a large family of computational problems collectively known as constrained optimization and satisfaction problems (CSPs), four decades of research in algorithms and computational complexity have led to a theory that tries to classify them as algorithmically tractable vs. intractable, i.e., polynomial-time solvable vs. NP-hard. However, an important gap remains in our knowledge: many CSPs of interest resist classification by this theory. Problems of practical relevance in this category include fundamental partition problems in graph theory, isomorphism problems in combinatorics, and strategy-design problems in mathematical game theory. To close this gap, the research of the last decade has been driven either by finding hard instances for algorithms that solve tighter and tighter relaxations of the original problem, or by formulating new hardness hypotheses that are stronger but admittedly less robust than NP-hardness.
The ultimate goal of this project is to close the gap between the partial progress that these approaches represent and the original classification project into tractable vs. intractable problems. Our thesis is that the field has reached a point where, in many cases of interest, analysing the current candidate algorithms that appear to solve all instances could suffice to classify the problem one way or the other, without the need for alternative hardness hypotheses. The novelty in our approach is a program to develop our recent discovery that, in some cases of interest, two methods from different areas match in strength: indistinguishability pebble games from mathematical logic, and hierarchies of convex relaxations from mathematical programming. We thus aim to make significant advances on the status of important algorithmic problems by seeking a general theory that unifies and goes beyond the current understanding of its components.
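The logic side of this correspondence has a concrete entry point: the simplest indistinguishability test, 1-dimensional Weisfeiler-Leman colour refinement, is closely related to the lowest levels of these relaxation hierarchies. The sketch below is an illustrative toy, not the project's machinery: it refines vertex colours for a few rounds and uses the colour histogram to certify non-isomorphism.

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    # 1-dimensional Weisfeiler-Leman (colour refinement): each round,
    # a vertex's new colour is (old colour, sorted multiset of its
    # neighbours' colours).  Different final histograms certify that
    # two graphs are non-isomorphic (the converse fails in general,
    # e.g. on regular graphs).
    col = {v: 0 for v in adj}
    for _ in range(rounds):
        col = {v: (col[v], tuple(sorted(col[u] for u in adj[v]))) for v in adj}
    return Counter(col.values())

# Path on 4 vertices vs. star with 3 leaves: same number of vertices
# and edges, but colour refinement separates their degree profiles.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(wl_histogram(path) == wl_histogram(star))  # -> False
```

Higher-dimensional versions of this game colour k-tuples of vertices instead of single vertices, which is where the match with level-k convex relaxation hierarchies appears.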
Max ERC Funding
1 725 656 €
Duration
Start date: 2015-06-01, End date: 2020-05-31