Project acronym AMORE
Project A distributional MOdel of Reference to Entities
Researcher (PI) Gemma BOLEDA TORRENT
Host Institution (HI) UNIVERSITAT POMPEU FABRA
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary When I asked my seven-year-old daughter "Who is the boy in your class who was also new in school last year, like you?", she instantly replied "Daniel", using the descriptive content in my utterance to identify an entity in the real world and refer to it. The ability to use language to refer to reality is crucial for humans, and yet it is very difficult to model. AMORE breaks new ground in Computational Linguistics, Linguistics, and Artificial Intelligence by developing a model of linguistic reference to entities, implemented as a computational system that can learn its own representations from data.
This interdisciplinary project builds on two complementary semantic traditions: 1) Formal semantics, a symbolic approach that can delimit and track linguistic referents, but does not adequately match them with the descriptive content of linguistic expressions; 2) Distributional semantics, which can handle descriptive content but does not associate it with individuated referents. AMORE synthesizes the two approaches into a unified, scalable model of reference that operates with individuated referents and links them to referential expressions characterized by rich descriptive content. The model is a distributed (neural network) version of a formal semantic framework that is furthermore able to integrate perceptual (visual) and linguistic information about entities. We test it extensively in referential tasks that require matching noun phrases (“the Medicine student”, “the white cat”) with entity representations extracted from text and images.
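The core referential task described above can be sketched with plain vector similarity: pick the entity whose representation best matches the descriptive content of a noun phrase. This is only an illustrative toy; the entity names, vectors and feature dimensions below are invented, and the project's actual model learns its representations rather than using hand-built ones.

```python
import numpy as np

def resolve_referent(phrase_vec, entities):
    """Pick the entity whose representation is most similar (cosine)
    to the descriptive content of the referring expression."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(entities, key=lambda name: cos(phrase_vec, entities[name]))

# Invented 3-dimensional "descriptive" features:
# [is_student, studies_medicine, is_cat]
entities = {
    "Daniel": np.array([1.0, 0.9, 0.0]),    # the Medicine student
    "Whiskers": np.array([0.0, 0.0, 1.0]),  # the white cat
}
phrase = np.array([0.9, 1.0, 0.1])          # vector for "the Medicine student"
print(resolve_referent(phrase, entities))   # Daniel
```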
AMORE advances our scientific understanding of language and its computational modeling, and contributes to the far-reaching debate between symbolic and distributed approaches to cognition with an integrative proposal. I am in a privileged position to carry out this integration, since I have contributed top research in both distributional and formal semantics.
Max ERC Funding
1 499 805 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym ArcheoDyn
Project Globular clusters as living fossils of the past of galaxies
Researcher (PI) Petrus VAN DE VEN
Host Institution (HI) UNIVERSITAT WIEN
Call Details Consolidator Grant (CoG), PE9, ERC-2016-COG
Summary Globular clusters (GCs) are enigmatic objects that hide a wealth of information. They are the living fossils of the history of their native galaxies and the record keepers of the violent events that made them change their domicile. This proposal aims to mine GCs as living fossils of galaxy evolution to address fundamental questions in astrophysics: (1) Do satellite galaxies merge as predicted by the hierarchical build-up of galaxies? (2) Which are the seeds of supermassive black holes in the centres of galaxies? (3) How did star formation originate in the earliest phases of galaxy formation? To answer these questions, novel population-dependent dynamical modelling techniques are required, whose development the PI has led over the past years. This uniquely positions him to take full advantage of the emerging wealth of chemical and kinematical data on GCs.
Following the tidal disruption of satellite galaxies, their dense GCs, and maybe even their nuclei, are left as the most visible remnants in the main galaxy. The hierarchical build-up of their new host galaxy can thus be unearthed by recovering the GCs’ orbits. However, it is currently unclear which of the GCs are accretion survivors. In fact, the existence of a central intermediate-mass black hole (IMBH) or of multiple stellar populations in a GC might reveal which ones were accreted. At the same time, detection of IMBHs is important as they are predicted seeds for supermassive black holes in galaxies, while the multiple stellar populations in GCs are vital witnesses to the extreme modes of star formation in the early Universe. However, for every putative dynamical IMBH detection so far there is a corresponding non-detection, and the origin of multiple stellar populations in GCs still lacks an uncontrived explanation. The synergy of novel techniques and exquisite data proposed here promises a breakthrough in this emerging field of dynamical archaeology with GCs as living fossils of the past of galaxies.
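Recovering a GC's orbit rests on standard orbit integration in an assumed host-galaxy potential. The sketch below is a deliberately minimal, hedged stand-in for the population-dependent dynamical modelling the proposal describes: a leapfrog integrator in a simple Keplerian point-mass potential, with all units and values invented.

```python
import numpy as np

def integrate_orbit(pos, vel, n_steps, dt, gm=1.0):
    """Leapfrog (kick-drift-kick) integration of a test particle, e.g. a
    globular cluster, in a Keplerian point-mass potential with GM = gm."""
    pos = np.asarray(pos, dtype=float).copy()
    vel = np.asarray(vel, dtype=float).copy()

    def acc(p):
        r = np.linalg.norm(p)
        return -gm * p / r ** 3

    a = acc(pos)
    trajectory = [pos.copy()]
    for _ in range(n_steps):
        vel += 0.5 * dt * a       # half kick
        pos += dt * vel           # drift
        a = acc(pos)
        vel += 0.5 * dt * a       # half kick
        trajectory.append(pos.copy())
    return np.array(trajectory)

# A circular orbit at r = 1 (speed sqrt(GM/r) = 1) should stay near r = 1:
traj = integrate_orbit([1.0, 0.0], [0.0, 1.0], n_steps=1000, dt=0.01)
radii = np.linalg.norm(traj, axis=1)
print(radii.min(), radii.max())
```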
Max ERC Funding
1 999 250 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym BACCO
Project Bias and Clustering Calculations Optimised: Maximising discovery with galaxy surveys
Researcher (PI) Raúl Esteban ANGULO de la Fuente
Host Institution (HI) FUNDACION CENTRO DE ESTUDIOS DE FISICA DEL COSMOS DE ARAGON
Call Details Starting Grant (StG), PE9, ERC-2016-STG
Summary A new generation of galaxy surveys will soon start measuring the spatial distribution of millions of galaxies over a broad range of redshifts, offering an imminent opportunity to discover new physics. A detailed comparison of these measurements with theoretical models of galaxy clustering may reveal a new fundamental particle, a breakdown of General Relativity, or a hint about the nature of cosmic acceleration. Despite large progress in the analytic treatment of structure formation in recent years, traditional clustering models still suffer from large uncertainties. This limits cosmological analyses to a very restricted range of scales and statistics, which will be one of the main obstacles to a comprehensive exploitation of future surveys.
Here I propose to develop a novel simulation-based approach to predict galaxy clustering. Combining recent advances in computational cosmology, from cosmological N-body calculations to physically motivated galaxy formation models, I will develop a unified framework to directly predict the position and velocity of individual dark matter structures and galaxies as a function of cosmological and astrophysical parameters. In this formulation, galaxy clustering will be a prediction of a set of physical assumptions in a given cosmological setting. The new theoretical framework will be flexible, accurate and fast: it will provide predictions for any clustering statistic, down to scales 100 times smaller than in state-of-the-art perturbation-theory-based models, and in less than 1 minute of CPU time. These advances will enable major improvements in future cosmological constraints, significantly increasing the overall power of future surveys and maximising our potential to discover new physics.
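For concreteness, the simplest clustering statistic such a framework would predict is the two-point correlation function. The toy estimator below (the "natural" estimator ξ(r) = DD/RR − 1, brute-force pair counts, uniform randoms in a tiny box) is purely illustrative and bears no resemblance to a production survey pipeline; all sizes and binnings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def xi_natural(data, box=1.0, bins=None, n_random=2000):
    """Two-point correlation via the natural estimator xi(r) = DD/RR - 1,
    using brute-force pair counts; toy scale only."""
    if bins is None:
        bins = np.linspace(0.1, 0.5, 5) * box
    rand = rng.uniform(0.0, box, size=(n_random, data.shape[1]))

    def norm_counts(pts):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        iu = np.triu_indices(len(pts), k=1)
        n_pairs = len(pts) * (len(pts) - 1) / 2
        return np.histogram(d[iu], bins=bins)[0] / n_pairs

    return norm_counts(data) / norm_counts(rand) - 1.0

# Unclustered (uniform) points should give xi close to 0 in every bin:
uniform = rng.uniform(0.0, 1.0, size=(500, 2))
print(xi_natural(uniform))
```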
Max ERC Funding
1 484 240 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym BePreSysE
Project Beyond Precision Cosmology: dealing with Systematic Errors
Researcher (PI) Licia VERDE
Host Institution (HI) UNIVERSITAT DE BARCELONA
Call Details Consolidator Grant (CoG), PE9, ERC-2016-COG
Summary Over the past 20 years cosmology has made the transition to a precision science: the standard cosmological model has been established and its parameters are now measured with unprecedented precision. But precision is not enough: accuracy is also crucial. Accuracy accounts for systematic errors, which can arise both on the observational and on the theory/modelling side (and everywhere in between). While there is a well-defined and developed framework for treating statistical errors, there is no established approach for systematic errors. The next decade will see the era of large surveys; a large coordinated effort of the scientific community in the field is ongoing to map the cosmos, producing an exponentially growing amount of data. This will shrink the statistical errors, making mitigation and control of systematics of the utmost importance. While there are isolated and targeted efforts to quantify systematic errors and propagate them all the way through to the final results, there is no well-established, self-consistent methodology. To go beyond precision cosmology and reap the benefits of the forthcoming observational program, a systematic approach to systematics is needed. Systematics should be interpreted in the most general sense as shifts between the recovered measured values and the true values of physical quantities. I propose to develop a comprehensive approach to tackling systematic errors, with the goal of uncovering and quantifying otherwise unknown differences between the interpretation of a measurement and reality. This will require fully developing, combining and systematizing all approaches proposed so far (many pioneered by the PI), developing new ones to fill the gaps, studying and exploring their interplay, and finally testing and validating the procedure. Beyond Precision Cosmology: Dealing with Systematic Errors (BePreSysE) will develop a framework to deal with systematics in forthcoming cosmological surveys which could, in principle, be applied beyond cosmology.
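The precision-versus-accuracy distinction at the heart of the proposal can be made concrete in a few lines: collecting more data shrinks the statistical scatter, but an uncorrected systematic shift survives averaging unchanged. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_VALUE = 10.0   # the physical quantity we try to recover (invented)
BIAS = 0.5          # uncorrected systematic shift (invented)
SIGMA = 2.0         # statistical error per measurement (invented)

def measure(n):
    """n noisy measurements that all share one systematic offset."""
    return TRUE_VALUE + BIAS + rng.normal(0.0, SIGMA, size=n)

small_sample = measure(10).mean()
large_sample = measure(100_000).mean()
# More data shrinks the statistical scatter, but the estimate converges
# to TRUE_VALUE + BIAS, not TRUE_VALUE: precision without accuracy.
print(small_sample, large_sample)
```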
Max ERC Funding
1 835 220 €
Duration
Start date: 2017-06-01, End date: 2022-05-31
Project acronym BOSS-WAVES
Project Back-reaction Of Solar plaSma to WAVES
Researcher (PI) Tom VAN DOORSSELAERE
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Consolidator Grant (CoG), PE9, ERC-2016-COG
Summary The solar coronal heating problem is a long-standing astrophysical problem. The slow DC (reconnection) heating models are well developed in detailed 3D numerical simulations. The fast AC (wave) heating mechanisms have traditionally been neglected since there were no wave observations.
Since 2007, we know that the solar atmosphere is filled with transverse waves, but we still have no adequate models (except for my own 1D analytical models) for the dissipation of these waves and the plasma heating they cause. We urgently need to know the contribution of these waves to the coronal heating problem.
In BOSS-WAVES, I will innovate the AC wave heating models by utilising novel 3D numerical simulations of propagating transverse waves. From previous results in my team, I know that the inclusion of the back-reaction of the solar plasma is crucial in understanding the energy dissipation: the wave heating leads to chromospheric evaporation and plasma mixing (by the Kelvin-Helmholtz instability).
BOSS-WAVES will bring the AC heating models to the same level as the state-of-the-art DC heating models.
The high-risk, high-gain goals are (1) to create a coronal loop heated by waves, starting from an "empty" corona, by evaporating chromospheric material, and (2) to pioneer models for whole active regions heated by transverse waves.
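A cartoon of the underlying physics (wave energy draining out of a transverse oscillation) is a damped 1D wave equation, solved here with simple finite differences. This is only a sketch: the grid, damping rate γ and boundary treatment are invented, and the proposal's actual tool is 3D MHD simulation, not a 1D scalar equation.

```python
import numpy as np

def damped_wave_energy(n=200, steps=2000, c=1.0, gamma=0.02, dt=0.02, dx=0.05):
    """Finite differences for u_tt = c^2 u_xx - gamma u_t, a 1D cartoon of
    a line-tied transverse oscillation losing energy; returns the initial
    and final wave energy."""
    x = np.arange(n) * dx
    u = np.sin(np.pi * x / x[-1])       # fundamental standing mode
    u_prev = u.copy()                   # zero initial velocity

    def energy(u_now, u_old):
        ut = (u_now - u_old) / dt       # kinetic part (zero initially)
        ux = np.diff(u_now) / dx        # potential part
        return 0.5 * dx * (np.sum(ut ** 2) + c ** 2 * np.sum(ux ** 2))

    e0 = energy(u, u_prev)
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        u_next = (2.0 * u - u_prev + dt ** 2 * c ** 2 * lap
                  - gamma * dt * (u - u_prev))
        u_next[0] = u_next[-1] = 0.0    # line-tied (fixed) ends
        u_prev, u = u, u_next
    return e0, energy(u, u_prev)

e0, ef = damped_wave_energy()
print(ef / e0)  # decays roughly like exp(-gamma * steps * dt)
```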
Max ERC Funding
1 991 960 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym EVOCLIM
Project Behavioral-evolutionary analysis of climate policy: Bounded rationality, markets and social interactions
Researcher (PI) Jeroen VAN DEN BERGH
Host Institution (HI) UNIVERSITAT AUTONOMA DE BARCELONA
Call Details Advanced Grant (AdG), SH2, ERC-2016-ADG
Summary Distinct climate policies are studied with incomparable approaches involving unique criteria and impacts. I propose to unite core features of such approaches within a behavioral-evolutionary framework, offering three advantages: evaluating the effectiveness of very different climate policy instruments in a consistent and comparative way; examining policy mixes by considering the interaction between instruments from a behavioral as well as a systemic perspective; and simultaneously assessing policy impacts mediated by markets and social interactions.
The key novelty is linking climate policies to populations of heterogeneous consumers and producers characterized by bounded rationality and social interactions. The resulting models will be used to assess the performance of policy instruments – such as various carbon pricing and information provision instruments – in terms of employment, equity and CO2 emissions. The approach is guided by five goals: (1) test the robustness of insights on carbon pricing from benchmark approaches that assume representative, rational agents; (2) test contested views on the joint employment-climate effects of shifting taxes from labor to carbon; (3) examine various instruments of information provision under distinct assumptions about social preferences and interactions; (4) study the regulation of commercial advertising as a climate policy option in the context of status-seeking and high-carbon consumption; and (5) explore the behavioral roots of energy/carbon rebound.
The research has a general, conceptual-theoretical focus rather than a focus on a particular country. Given the complexity of the developed models, it involves numerical analyses with parameter values in realistic ranges, partly supported by insights from questionnaire-based surveys among consumers and firms. One survey examines information provision instruments and social interaction channels, while another assesses behavioral foundations of rebound. The project will culminate in improved advice on climate policy.
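The modelling style described above (heterogeneous agents, a bounded-rationality choice rule, a social-interaction term, and a policy lever) can be sketched minimally as follows. Every number, the logit choice rule and the imitation term are invented for illustration and are not the project's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(tax, n_agents=500, steps=100, social_weight=0.3):
    """Toy behavioral sketch: agents pick the low-carbon good via a logit
    (bounded-rationality) rule on the price gap, including a carbon tax,
    plus imitation of the current adoption share."""
    price_low_carbon, price_high_carbon = 1.2, 1.0   # green good costs more
    emissions_high = 1.0                             # taxed carbon content
    green = rng.random(n_agents) < 0.1               # initial adopters
    for _ in range(steps):
        cost_gap = (price_high_carbon + tax * emissions_high) - price_low_carbon
        utility = cost_gap + social_weight * (green.mean() - 0.5)
        p_green = 1.0 / (1.0 + np.exp(-5.0 * utility))
        green = rng.random(n_agents) < p_green
    return green.mean()

share_no_tax = simulate(tax=0.0)
share_with_tax = simulate(tax=0.5)
print(share_no_tax, share_with_tax)  # adoption rises with the carbon tax
```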
Max ERC Funding
1 943 924 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym FASTPARSE
Project Fast Natural Language Parsing for Large-Scale NLP
Researcher (PI) Carlos GÓMEZ RODRÍGUEZ
Host Institution (HI) UNIVERSIDADE DA CORUNA
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary The popularization of information technology and the Internet has resulted in an unprecedented growth in the scale at which individuals and institutions generate, communicate and access information. In this context, the effective leveraging of the vast amounts of available data to discover and address people's needs is a fundamental problem of modern societies.
Since most of this circulating information is in the form of written or spoken human language, natural language processing (NLP) technologies are a key asset for this crucial goal. NLP can be used to break language barriers (machine translation), find required information (search engines, question answering), monitor public opinion (opinion mining), or digest large amounts of unstructured text into more convenient forms (information extraction, summarization), among other applications.
These and other NLP technologies rely on accurate syntactic parsing to extract or analyze the meaning of sentences. Unfortunately, current state-of-the-art parsing algorithms have high computational costs, processing less than a hundred sentences per second on standard hardware. While this is acceptable for working on small sets of documents, it is clearly prohibitive for large-scale processing, and thus constitutes a major roadblock for the widespread application of NLP.
The goal of this project is to eliminate this bottleneck by developing fast parsers that are suitable for web-scale processing. To do so, FASTPARSE will improve the speed of parsers on several fronts: by avoiding redundant calculations through the reuse of intermediate results from previous sentences; by applying a cognitively-inspired model to compress and recode linguistic information; and by exploiting regularities in human language to find patterns that the parsers can take for granted, avoiding their explicit calculation. The joint application of these techniques will result in much faster parsers that can power all kinds of web-scale NLP applications.
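One of the three speed fronts above, reusing intermediate results across sentences, can be illustrated with a memoized toy parser over a three-rule grammar. The grammar and lexicon are invented, and real parsers cache fine-grained chart items rather than whole spans; this is only a minimal sketch of the caching idea.

```python
from functools import lru_cache

# Invented toy grammar in Chomsky normal form and a four-word lexicon.
RULES = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
LEX = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

@lru_cache(maxsize=None)
def parse_span(tokens):
    """Return the category spanning `tokens`, reusing cached sub-span
    analyses: repeated phrases across sentences are parsed only once."""
    if len(tokens) == 1:
        return LEX.get(tokens[0])
    for split in range(1, len(tokens)):
        pair = (parse_span(tokens[:split]), parse_span(tokens[split:]))
        if pair in RULES:
            return RULES[pair]
    return None

print(parse_span(("the", "dog", "saw", "the", "cat")))  # S
# A second sentence reuses the cached analyses of "the dog" and "the cat":
print(parse_span(("the", "cat", "saw", "the", "dog")))  # S
```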
Max ERC Funding
1 481 747 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym FINEPRINT
Project Spatially explicit material footprints: fine-scale assessment of Europe’s global environmental and social impacts
Researcher (PI) Stefan GILJUM
Host Institution (HI) WIRTSCHAFTSUNIVERSITAT WIEN
Call Details Consolidator Grant (CoG), SH2, ERC-2016-COG
Summary In the era of globalisation, supply chains are increasingly organised on the international level, thus disconnecting final consumption from the location of material extraction and related environmental and social impacts. Reducing these global impacts – or footprints – of European consumption is a major societal and scientific challenge. Methods to assess teleconnections between distant places of raw material extraction and consumption along global supply chains have improved significantly, with multi-regional input-output (MRIO) analysis being the most prominent method applied. However, the limited spatial resolution of MRIO models distorts footprint calculations, as specific properties of raw materials as well as impacts of extraction can vary significantly within production countries. I therefore propose a new method for the calculation of fine-scale material consumption footprints. It will encompass (1) a spatial assessment of global material extraction on a high-resolution grid and (2) a detailed physical model that tracks raw materials from the location of extraction via international transport facilities to processing industries in importing countries. Integrating this very detailed spatial information with an MRIO model will enable the first fine-scale assessment of European countries’ material footprints, overcoming prevailing aggregation errors in footprint indicators. Furthermore, I will investigate environmental and social impacts related to material footprints through linking the spatially explicit multi-regional material flow model with datasets on impacts related to raw material extraction, such as increasing water scarcity, deforestation and mining conflicts. This project will not only lift the accuracy of footprint models to a new level, but will also open up a range of options for sustainability assessments of specific commodity flows. Building on this knowledge, targeted policy instruments for sustainable product supply chains can be designed.
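In its textbook form, the MRIO footprint machinery the project extends reduces to the Leontief inverse: gross output x = (I − A)⁻¹ y, and the material footprint e·x, where e holds extraction intensities. A toy two-region, two-sector example with invented numbers:

```python
import numpy as np

# Toy 2-region x 2-sector MRIO table; all numbers invented.
A = np.array([[0.10, 0.20, 0.05, 0.00],   # technical coefficients
              [0.05, 0.10, 0.00, 0.05],
              [0.10, 0.00, 0.10, 0.20],
              [0.00, 0.10, 0.05, 0.10]])
y = np.array([50.0, 30.0, 40.0, 20.0])    # final demand per region-sector
e = np.array([2.0, 0.5, 3.0, 0.8])        # material extraction per unit output

L = np.linalg.inv(np.eye(4) - A)          # Leontief inverse
x = L @ y                                 # gross output required by y
footprint_total = e @ x                   # material embodied in consumption
footprint_by_demand = (e @ L) * y         # attribution to demand categories
print(footprint_total)
```

Because indirect requirements are included (L is at least the identity, elementwise), the embodied footprint exceeds the direct extraction implied by final demand alone.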
Max ERC Funding
1 999 909 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym MAT_STOCKS
Project Understanding the Role of Material Stock Patterns for the Transformation to a Sustainable Society
Researcher (PI) Helmut Haberl
Host Institution (HI) UNIVERSITAET FUER BODENKULTUR WIEN
Call Details Advanced Grant (AdG), SH2, ERC-2016-ADG
Summary Sustainability transformations imply fundamental changes in the societal use of biophysical resources. Current socioeconomic metabolism research traces flows of energy, materials or substances to capture resource use: input of raw materials or energy, their fate in production and consumption, and the discharge of wastes and emissions. This approach has yielded important insights into eco-efficiency and long-term drivers of resource use. But due to its focus on flows, socio-metabolic research has not yet incorporated material stocks or their services, thereby not fully realizing its analytic potential. MAT_STOCKS addresses this gap by developing a consistent typology, indicators and databases of material stocks and their services, building upon economy-wide material flow analysis. It will create a comprehensive, global, national-level, validated material stocks and services database as well as maps of material stocks from remote-sensing data. This will allow analysis of the stock/flow/service nexus and underpin highly innovative indicators of eco-efficiency, overcoming limitations of current approaches, which mainly relate resource use or emissions to population and GDP. New insights on stock/flow/service relations, the relevance of spatial patterns and options for decoupling will be used to create a dynamic model to assess option spaces for transformations towards sustainable metabolism. MAT_STOCKS will identify barriers and leverage points for future sustainability transformations and the SDGs, and elucidate their socio-ecological and political implications. Our preliminary analyses suggest that unravelling the stock/flow/service nexus provides a crucial missing link in socio-metabolic research because it explains why, how and where patterns of material and energy use change or remain locked-in. Thereby, important analytical insights will be introduced into the largely normative and local discourses on the transformation towards a sustainable society.
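The stock accounting that extends economy-wide material flow analysis reduces to a simple balance: in-use stock in year t equals the previous stock plus gross additions to stock minus discards. A minimal sketch with illustrative numbers (not project data):

```python
# Stock-flow balance in material flow analysis (illustrative values).
# gross_additions: materials entering the in-use stock per year (e.g. Mt)
# discards: materials leaving the stock via demolition/discard per year
gross_additions = [100.0, 110.0, 120.0]
discards        = [ 30.0,  35.0,  40.0]

stock = 1000.0  # assumed initial in-use stock, Mt
for inflow, outflow in zip(gross_additions, discards):
    stock += inflow - outflow  # stock(t) = stock(t-1) + inflow - outflow
# net additions: 70 + 75 + 80 = 225, so stock ends at 1225.0 Mt
```

The project's databases would, in effect, populate such balances per material, country and year, and link the resulting stocks to the services they provide.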
Max ERC Funding
2 483 686 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym MIGRADEMO
Project Migration and Democratic Diffusion: Comparing the Impact of Migration on Democratic Participation and Processes in Countries of Origin
Researcher (PI) Eva Kristine ØSTERGAARD-NIELSEN
Host Institution (HI) UNIVERSITAT AUTONOMA DE BARCELONA
Call Details Consolidator Grant (CoG), SH2, ERC-2016-COG
Summary The objective of this project is to unravel the impact of migration on democratic participation and processes in countries of origin. Both international migration and democratic development are important contemporary public and policy concerns. Recent studies of transnational migrant practices have uncovered how migrants influence democratic participation in their homelands through the remittance of money and newfound ideas about democracy from afar or through return. Yet, there is still little comparative knowledge of how these processes intersect with broader economic, social and political transformations in countries of origin. Moreover, complex migration experiences and nonlinear processes of democratization in countries of origin point to the need for a more nuanced conceptualization of what kind of political ideas circulate and are negotiated among migrants, return migrants and non-migrants in countries of origin.
The proposed project outlines an ambitious long-term comparative research strategy to analyse and theorize the scope and dynamics of processes of democratic diffusion through migration. The research strategy of the project is innovative in combining analysis of democratic diffusion across three countries of origin and at three levels of democratic participation and processes: individual citizens, civil society and among political leaders and representatives. To that end the project draws on both statistical and qualitative research methods and analysis. The project will analyse already existing aggregate data on remittances and political behaviour and, importantly, generate new comprehensive datasets based on surveys and in-depth qualitative research among non-migrants and returnees in countries of origin. Consequently, the project will contribute to our theoretical understanding of the conditions under which migration can influence democratic processes as well as the broader research fields of democracy studies, migration and citizenship.
Max ERC Funding
1 451 264 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym POLMAG
Project Polarized Radiation Diagnostics for Exploring the Magnetism of the Outer Solar Atmosphere
Researcher (PI) Javier Trujillo Bueno
Host Institution (HI) INSTITUTO DE ASTROFISICA DE CANARIAS
Call Details Advanced Grant (AdG), PE9, ERC-2016-ADG
Summary POLMAG aims at a true breakthrough in the development and application of polarized radiation diagnostic methods for exploring the magnetic fields of the chromosphere, transition region and corona of the Sun via the interpretation of the Stokes profiles produced by optically polarized atoms and the Hanle and Zeeman effects in ultraviolet (UV), visible and near-infrared spectral lines. To this end, POLMAG will combine and expand expertise on atomic physics, on the quantum theory of radiation, on high-precision spectropolarimetry, on advanced methods in numerical radiative transfer, and on the confrontation of spectropolarimetric observations with spectral synthesis in increasingly realistic three-dimensional (3D) numerical models of the solar atmosphere.
POLMAG targets the following very challenging issues:
- Which are the optimum spectral lines for probing the magnetism of the outer solar atmosphere?
- How to efficiently compute the Stokes profiles, taking into account partial frequency redistribution, J-state quantum interference and the Hanle and Zeeman effects?
- How to determine the magnetic, thermal and dynamic structure of the outer solar atmosphere through confrontation with spectropolarimetric observations?
POLMAG will go well beyond the current state of the art as follows:
- Applying and extending the quantum theory of light polarization
- Developing and applying efficient radiative transfer codes
- Modeling the Ly-alpha and Mg II h & k observations of our CLASP suborbital rocket experiments
- Developing novel coronal magnetometry methods by complementing for the first time the information provided by forbidden and permitted lines
- Developing the plasma diagnostic techniques needed for the scientific exploitation of spectropolarimetric observations with the new generation of solar telescopes and putting them at the disposal of the astrophysical community
POLMAG will open up a new diagnostic window in astrophysics.
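As a concrete illustration of the Zeeman diagnostics the project builds on, the classical Zeeman wavelength splitting is a textbook relation (not project code); the Fe I 6302.5 Å line and the 1000 G field strength below are illustrative choices:

```python
# Classical Zeeman splitting of the sigma components of a spectral line:
#   dlambda = 4.6686e-13 * lambda0**2 * g_eff * B
# with lambda0 in Angstrom, B in Gauss, and dlambda in Angstrom.
def zeeman_splitting(lambda0_angstrom, g_eff, b_gauss):
    return 4.6686e-13 * lambda0_angstrom**2 * g_eff * b_gauss

# Fe I 6302.5 A (effective Lande factor 2.5) in a 1000 G photospheric
# field: splitting of roughly 46 mA, comparable to the line width,
# which is why such lines are workhorses of solar magnetometry.
dl = zeeman_splitting(6302.5, 2.5, 1000.0)
```

Because the splitting scales with the square of the wavelength while Doppler broadening scales linearly, infrared lines are more Zeeman-sensitive; in the weak chromospheric and coronal fields POLMAG targets, the Zeeman signal becomes small and the Hanle effect grows in diagnostic importance.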
Max ERC Funding
2 478 750 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym QUALIDEM
Project Eroding Democracies. A qualitative (re-)appraisal of how policies shape democratic linkages in Western democracies
Researcher (PI) Virginie VAN INGELGOM
Host Institution (HI) UNIVERSITE CATHOLIQUE DE LOUVAIN
Call Details Starting Grant (StG), SH2, ERC-2016-STG
Summary The future consolidation or erosion of western democracies depends on the political perceptions, experiences and participation of ordinary citizens. Even when they disagree on the implications of their findings, previous studies stress that both attitudinal and behavioural forms of democratic linkages – political trust, political support, loyalty, formal and informal participation – have come under considerable pressure in recent decades. The QUALIDEM project offers a qualitative (re)appraisal of citizens’ (dis-)affection towards politics by relying on the core argument of the policy feedback literature: attitudes and behaviours are outcomes of past policy. It aims to explain the evolutions of democratic linkages as being shaped by public policy, and specifically by the turn to neoliberalism and supranationalisation. It aims to systematically analyse the domestic and socially differentiated effects of both of these major macro transformations on citizens’ representations and experiences of politics, in addition to the existing emphasis on individual determinants and contextual explanations of disengagement and disaffection towards politics. On the theoretical level, this project therefore aims to build bridges between scholars of public policy and students of mass politics. On the empirical level, QUALIDEM relies on the reanalysis of qualitative data – interviews and focus groups – from a diachronic and comparative perspective focusing on four Western European countries (Belgium, France, Germany and the UK) with the US serving as a counterpoint. It will renew the methodological approach to the question of ordinary citizens’ disengagement and disaffection by providing a detailed and empirically-grounded understanding of the mechanisms of production and change in democratic linkages. It will develop an innovative methodological infrastructure for the storage of and access to twenty years of qualitative European comparative surveys.
Max ERC Funding
1 491 659 €
Duration
Start date: 2017-04-01, End date: 2022-03-31
Project acronym STYDS
Project Seeing things you don't see: Unifying the philosophy, psychology and neuroscience of multimodal mental imagery
Researcher (PI) Bence Gyorgy NANAY
Host Institution (HI) UNIVERSITEIT ANTWERPEN
Call Details Consolidator Grant (CoG), SH4, ERC-2016-COG
Summary When I am looking at my coffee machine that makes funny noises, this is an instance of multisensory perception – I perceive this event by means of both vision and audition. But very often we only receive sensory stimulation from a multisensory event by means of one sense modality. If I hear the noisy coffee machine in the next room (without seeing it), then how do I represent the visual aspects of this multisensory event?
The aim of this research project is to bring together empirical findings about multimodal perception and empirical findings about (visual, auditory, tactile) mental imagery and argue that on occasions like the one described in the last paragraph, we have multimodal mental imagery: perceptual processing in one sense modality (here: vision) that is triggered by sensory stimulation in another sense modality (here: audition).
Multimodal mental imagery is rife. The vast majority of what we perceive are multisensory events: events that can be perceived in more than one sense modality – like the noisy coffee machine. And most of the time we are only acquainted with these multisensory events via a subset of the sense modalities involved – all the other aspects of these events are represented by means of multisensory mental imagery. This means that multisensory mental imagery is a crucial element of almost all instances of everyday perception, which has wider implications for philosophy of perception and beyond, to epistemological questions about whether we can trust our senses.
Focusing on multimodal mental imagery can help us to understand a number of puzzling perceptual phenomena, like sensory substitution and synaesthesia. Further, manipulating mental imagery has recently become an important clinical procedure in various branches of psychiatry as well as in counteracting implicit bias – using multimodal mental imagery rather than voluntarily and consciously conjured up mental imagery can lead to real progress in these experimental paradigms.
Max ERC Funding
1 966 530 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym TRADEPOWER
Project Power in international trade negotiations
Researcher (PI) Andreas DUER
Host Institution (HI) PARIS-LODRON-UNIVERSITAT SALZBURG
Call Details Consolidator Grant (CoG), SH2, ERC-2016-COG
Summary For the last twenty years, countries across the globe have negotiated a large number of preferential trade agreements. In parallel, trade negotiations have taken place in the framework of the World Trade Organization. These negotiations not only deal with tariffs, but also cover investments, competition policy, labour standards and much more. With much at stake, the extent to which different countries are able to achieve their preferred outcomes in these negotiations is of broad interest. In this project, I address this topic by asking: what makes some countries have more bargaining power than others in these negotiations? In other words, what explains variation in bargaining power in trade negotiations?
My approach to these questions is ground-breaking in terms of theory, empirical research and methodology:
1.) I develop an original theoretical argument that links the globalization of production to bargaining power in trade negotiations. Concretely, I argue that the offshoring of production reduces the importance of market size in trade negotiations. The argument leads to the expectation of systematic variation in bargaining power over time, and across pairs of countries and sectors.
2.) I will collect novel and systematic data to test this argument, going far beyond the empirical evidence currently used to assess bargaining power in trade negotiations. The empirical research will bring together qualitative evidence from case studies with quantitative evidence on both the perception of power and the actual outcomes of trade negotiations.
3.) I will innovate methodologically by combining and comparing three approaches to measuring bargaining power, namely process tracing, attributed influence and preference attainment.
The project will make a key contribution not only to the literature on bargaining power in international trade negotiations, but also to research on, e.g., international development, international institutions and the political economy of trade.
Max ERC Funding
1 705 833 €
Duration
Start date: 2017-07-01, End date: 2022-06-30