Project acronym AFRIVAL
Project African river basins: catchment-scale carbon fluxes and transformations
Researcher (PI) Steven Bouillon
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE10, ERC-2009-StG
Summary This proposal aims to fundamentally improve our understanding of the role of tropical freshwater ecosystems in carbon (C) cycling on the catchment scale. It uses an unprecedented combination of state-of-the-art proxies such as stable isotope, 14C and biomarker signatures to characterize organic matter, radiogenic isotope signatures to determine particle residence times, as well as field measurements of relevant biogeochemical processes. We focus on tropical systems since there is a striking lack of data on such systems, even though riverine C transport is thought to be disproportionately high in tropical areas. Furthermore, the presence of landscape-scale contrasts in vegetation (in particular, C3 vs. C4 plants) is an important asset in the use of stable isotopes as natural tracers of C cycling processes on this scale. Freshwater ecosystems are an important component in the global C cycle, and the primary link between terrestrial and marine ecosystems. Recent estimates indicate that ~2 Pg C y-1 (Pg = petagram) enter freshwater systems, i.e., about twice the estimated global terrestrial C sink. More than half of this is thought to be remineralized before it reaches the coastal zone, and for the Amazon basin this has even been suggested to be ~90% of the lateral C inputs. How general these patterns are remains a matter of debate, and assessing the mechanisms determining the degree of processing versus transport of organic carbon in lakes and river systems is critical to further constrain their role in the global C cycle. This proposal provides an interdisciplinary approach to describe and quantify catchment-scale C transport and cycling in tropical river basins. Besides conceptual and methodological advances, and a significant expansion of our dataset on C processes in such systems, new data gathered in this project are likely to provide exciting and novel hypotheses on the functioning of freshwater systems and their linkage to the terrestrial C budget.
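To make the quoted fluxes concrete, the short sketch below partitions the ~2 Pg C y-1 lateral input between remineralization and export to the coast; the remineralized fractions are the illustrative figures quoted above, not project results.

```python
# Back-of-the-envelope partitioning of the riverine carbon budget quoted in
# the summary. The split fractions are illustrative assumptions, not results
# of the AFRIVAL project.

LATERAL_INPUT_PG_C_PER_YR = 2.0   # ~2 Pg C per year entering freshwater systems

def coastal_export(lateral_input_pg, remineralized_fraction):
    """Return (remineralized, exported-to-coast) fluxes in Pg C per year."""
    remineralized = lateral_input_pg * remineralized_fraction
    exported = lateral_input_pg - remineralized
    return remineralized, exported

for label, fraction in [("global estimate (>50% remineralized)", 0.5),
                        ("Amazon-like basin (~90% remineralized)", 0.9)]:
    r, e = coastal_export(LATERAL_INPUT_PG_C_PER_YR, fraction)
    print(f"{label}: {r:.1f} Pg C/yr remineralized, {e:.1f} Pg C/yr reaches the coast")
```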
Max ERC Funding
1 745 262 €
Duration
Start date: 2009-10-01, End date: 2014-09-30
Project acronym ARCHGLASS
Project Archaeometry and Archaeology of Ancient Glass Production as a Source for Ancient Technology and Trade of Raw Materials
Researcher (PI) Patrick Degryse
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), SH6, ERC-2009-StG
Summary In this project, innovative techniques to reconstruct ancient economies are developed and new insights into the trade and processing of mineral raw materials are gained through interdisciplinary archaeological and archaeometrical research. An innovative methodology for, and a practical provenance database of, the primary origin of natron glass from the Hellenistic-Roman world will be established. The project investigates both production and consumer sites of glass raw materials, combining typo-chronological and archaeometrical (isotope geochemical) study of finished glass artefacts at consumer sites with mineralogical and chemical characterisation of raw glass and mineral resources at primary production sites. Suitable sand resources in the locations described by ancient authors will be identified through geological prospecting based on literature review and field work. Sand and flux (natron) deposits will be mineralogically and geochemically characterised and compared to the results of the archaeological and geochemical investigations of the glass. Through integrated typo-chronological and archaeometrical analysis, the possible occurrence of primary production centres of raw glass outside the known locations in Syro-Palestine and Egypt, particularly in North Africa, Italy, Spain and Gaul, will be critically studied. In this way, historical, archaeological and archaeometrical data are combined, developing new interdisciplinary techniques for innovative archaeological interpretation of glass trade in the Hellenistic-Roman world.
Max ERC Funding
954 960 €
Duration
Start date: 2009-11-01, End date: 2014-10-31
Project acronym CHINA
Project Trade, Productivity, and Firm Capabilities in China's Manufacturing Sector
Researcher (PI) Johannes Van Biesebroeck
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), SH1, ERC-2009-StG
Summary China's economy has expanded at breakneck speed to become the 3rd largest trading country in the world and the largest recipient of foreign direct investment (FDI). Entry into the WTO in 2001 was a landmark event in this ongoing process, and I propose to study several channels through which it spurred China's industrial development. Crucially, I will take an integrated view of the different ways in which Chinese and Western firms interact: through trade flows, as suppliers or competitors, FDI, or knowledge transfers. First, I investigate the existence and magnitude of a causal link from the trade reforms to productivity growth. Second, I look for evidence of capability upgrading, such as increased production efficiency, an ability to produce higher-quality products, or the introduction of new products through innovation. Third, I study the mechanisms for the impact of trade and FDI on local firms, in particular assessing the relative importance of increased market competition and the transfer of know-how from foreign firms. For this analysis, I draw heavily on a unique data set. Information on the universe of Chinese manufacturing firms is being linked to the universe of Chinese trade transactions. These are unique research tools on their own, but as a linked data set, the only comparable one in the world is for the U.S. economy. The Chinese data have the advantage of containing detailed information on FDI, distinguishing between ordinary and processing trade, and including information on innovation, such as R&D and sales of new goods. Answering the above questions is important for other developing countries wanting to learn from China's experience and for Western firms assessing how quickly Chinese firms will become viable suppliers of sophisticated inputs or direct competitors. By estimating models that are explicitly derived from new theories, I advance the literature at the intersection of international and development economics, industrial organization, and economic geography.
Max ERC Funding
944 940 €
Duration
Start date: 2010-02-01, End date: 2016-01-31
Project acronym COCOON
Project Conformal coating of nanoporous materials
Researcher (PI) Christophe Detavernier
Host Institution (HI) UNIVERSITEIT GENT
Call Details Starting Grant (StG), PE8, ERC-2009-StG
Summary CONTEXT - Nanoporous structures are used for applications in catalysis, molecular separation, fuel cells, dye-sensitized solar cells, etc. Given the near-molecular size of the porous network, it is extremely challenging to modify the interior surface of the pores after the nanoporous material has been synthesized.
THIS PROPOSAL - Atomic Layer Deposition (ALD) is envisioned as a novel technique for creating catalytically active sites and for controlling the pore size distribution in nanoporous materials. ALD is a self-limited growth method that is characterized by alternating exposure of the growing film to precursor vapours, resulting in the sequential deposition of (sub)monolayers. It provides atomic level control of thickness and composition, and is currently used in micro-electronics to grow films into structures with aspect ratios of up to 100 / 1. We aim to make the fundamental breakthroughs necessary to enable atomic layer deposition to engineer the composition, size and shape of the interior surface of nanoporous materials with aspect ratios in excess of 10,000 / 1.
POTENTIAL IMPACT - Achieving these objectives will enable atomic-level engineering of the interior surface of any porous material. We plan to focus on three specific applications where our results will have both medium- and long-term impacts:
- Engineering the composition of pore walls using ALD, e.g. to create catalytic sites (e.g. Al for acid sites, Ti for redox sites, or Pt, Pd or Ni): chemical functionalization of the pore walls with atomic-level control can result in breakthrough applications in the fields of catalysis and sensors.
- Atomic-level control of the size of nanopores through ALD: controlling the pore size distribution of molecular sieves can potentially lead to breakthrough applications in molecular separation and filtration.
- Nanocasting: replication of a mesoporous template by means of ALD can result in the mass-scale production of nanotubes.
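As a rough illustration of the pore-size control mentioned in the list above, the sketch below assumes an ideal, fully conformal ALD process with a fixed growth per cycle; the initial diameter and growth-per-cycle values are hypothetical placeholders, not COCOON results.

```python
# Idealized pore narrowing by conformal ALD: each cycle adds one conformal
# layer of thickness `growth_per_cycle_nm` on the pore wall, reducing the
# pore diameter by twice that amount. All numbers are illustrative assumptions.

def pore_diameter_after_ald(initial_diameter_nm, growth_per_cycle_nm, n_cycles):
    """Remaining pore diameter after n ideal, fully conformal ALD cycles."""
    remaining = initial_diameter_nm - 2.0 * growth_per_cycle_nm * n_cycles
    return max(remaining, 0.0)   # the pore closes once the opposing walls meet

if __name__ == "__main__":
    d0 = 2.0    # nm, hypothetical initial pore diameter
    gpc = 0.1   # nm per cycle, typical order of magnitude for oxide ALD
    for cycles in range(0, 11, 2):
        d = pore_diameter_after_ald(d0, gpc, cycles)
        print(f"{cycles:2d} cycles -> pore diameter ~ {d:.1f} nm")
```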
Max ERC Funding
1 432 800 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym COGNIMUND
Project Cognitive Image Understanding: Image representations and Multimodal learning
Researcher (PI) Tinne Tuytelaars
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary One of the primary and most appealing goals of computer vision is to automatically understand the content of images on a cognitive level. Ultimately we want to have computers interpret images as we humans do, recognizing all the objects, scenes, and people as well as their relations as they appear in natural images or video. With this project, I want to advance the state of the art in this field in two directions, which I believe to be crucial to build the next generation of image understanding tools. First, novel, more robust yet descriptive image representations will be designed that incorporate the intrinsic structure of images. These should already go a long way towards removing irrelevant sources of variability while capturing the essence of the image content. I believe the importance of further research into image representations is currently underestimated within the research community, yet I claim this is a crucial step with many opportunities: good learning cannot easily make up for bad features. Second, weakly supervised methods to learn from multimodal input (especially the combination of images and text) will be investigated, making it possible to leverage the large amount of weak annotations available via the internet. This is essential if we want to scale the methods to a larger number of object categories (several hundreds instead of a few tens). As more data can be used for training, such weakly supervised methods might in the end even come on par with or outperform supervised schemes. Here we will call upon the latest results in semi-supervised learning, data mining, and computational linguistics.
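A minimal software sketch of the weakly supervised multimodal idea, under the simplifying assumptions that images are given as fixed feature vectors and that web annotations are noisy binary tags; the synthetic data, dimensions and ridge-regression readout are illustrative choices, not the project's actual methods.

```python
# Weak supervision from noisy web tags: images are fixed feature vectors,
# supervision is a corrupted set of text tags. One linear scorer per tag is
# fit with a ridge-regularized least-squares readout. Everything is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_images, feat_dim, n_tags = 200, 64, 5

X = rng.normal(size=(n_images, feat_dim))        # image feature vectors
true_W = rng.normal(size=(feat_dim, n_tags))
Y = (X @ true_W > 0).astype(float)               # "clean" tag presence
noise = rng.random(Y.shape) < 0.2                # 20% of web annotations flipped
Y_weak = np.where(noise, 1.0 - Y, Y)

# Ridge-regularized least-squares readout, one column per tag.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(feat_dim), X.T @ Y_weak)

pred = (X @ W > 0.5).astype(float)
print("agreement with clean tags:", (pred == Y).mean())
```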
Max ERC Funding
1 538 380 €
Duration
Start date: 2010-02-01, End date: 2015-01-31
Project acronym COLLREGEN
Project Collagen scaffolds for bone regeneration: applied biomaterials, bioreactor and stem cell technology
Researcher (PI) Fergal Joseph O'brien
Host Institution (HI) ROYAL COLLEGE OF SURGEONS IN IRELAND
Call Details Starting Grant (StG), PE8, ERC-2009-StG
Summary Regenerative medicine aims to regenerate damaged tissues by developing functional cell, tissue, and organ substitutes to repair, replace or enhance biological function in damaged tissues. The focus of this research programme is to develop bone graft substitute biomaterials and laboratory-engineered bone tissue for implantation in damaged sites. At a simplistic level, biological tissues consist of cells, signalling mechanisms and extracellular matrix. Regenerative medicine/tissue engineering technologies are based on this biological triad and involve the successful interaction between three components: the scaffold that holds the cells together to create the tissue's physical form, the cells that create the tissue, and the biological signalling mechanisms (such as growth factors or bioreactors) that direct the cells to express the desired tissue phenotype. The research proposed in this project includes specific projects in all three areas. The programme will be centred on the collagen-based biomaterials developed in the applicant's laboratory and will incorporate cutting-edge stem cell technologies, growth factor delivery, gene therapy and bioreactor technology which will translate to in vivo tissue repair. This translational research programme will be divided into four specific themes: (i) development of novel osteoinductive and angiogenic smart scaffolds for bone tissue regeneration, (ii) scaffold and stem cell therapies for bone tissue regeneration, (iii) bone tissue engineering using a flow perfusion bioreactor and (iv) in vivo bone repair using engineered bone and smart scaffolds.
Max ERC Funding
1 999 530 €
Duration
Start date: 2009-11-01, End date: 2015-09-30
Project acronym COUNTATOMS
Project Counting Atoms in nanomaterials
Researcher (PI) Gustaaf Van Tendeloo
Host Institution (HI) UNIVERSITEIT ANTWERPEN
Call Details Advanced Grant (AdG), PE5, ERC-2009-AdG
Summary Advanced electron microscopy for solid state materials has evolved from a qualitative imaging setup to a quantitative scientific technique. This will allow us not only to probe and better understand the fundamental behaviour of (nano)materials at an atomic level but also to guide technology towards new horizons. The installation in 2009 of a new and unique electron microscope with a real space resolution of 50 pm and an energy resolution of 100 meV will make it possible to perform unique experiments. We believe that the position of atoms at an interface or at a surface can be determined with a precision of 1 pm; this precision is essential as input for modelling the material's properties. It will first be applied to explain the fascinating behaviour of multilayer ceramic materials. The new experimental limits will also allow us to literally count the number of atoms within an atomic column, and in particular the number of foreign atoms. This will require not only experimental skills but also theoretical support. A real challenge is probing the magnetic and electronic information of a single atom column. According to theory this would be possible using ultra high resolution. This new probing technique will be of extreme importance for e.g. spintronics. Modern (nano)technology increasingly requires information in 3 dimensions (3D), rather than in 2D. This is possible through electron tomography; this technique will be optimised in order to obtain sub-nanometre precision. A final challenge is the study of the interface between soft matter (bio- or organic materials) and hard matter. This was hitherto impossible because of the radiation damage caused by the electron beam. With the possibility to lower the voltage to 80 kV and possibly 50 kV while more or less maintaining the resolution, we will hopefully be able to probe the active sites for catalysis.
Max ERC Funding
2 000 160 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym E-SWARM
Project Engineering Swarm Intelligence Systems
Researcher (PI) Marco Dorigo
Host Institution (HI) UNIVERSITE LIBRE DE BRUXELLES
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary Swarm intelligence is the discipline that deals with natural and artificial systems composed of many individuals that coordinate using decentralized control and self-organization. In this project, we focus on the design and implementation of artificial swarm intelligence systems for the solution of complex problems. Our current understanding of how to use swarms of artificial agents largely relies on rules of thumb and intuition based on the experience of individual researchers. This is not sufficient for us to design swarm intelligence systems at the level of complexity required by many real-world applications, or to accurately predict the behavior of the systems we design. The goal of E-SWARM is to develop a rigorous engineering methodology for the design and implementation of artificial swarm intelligence systems. We believe that in the future, swarm intelligence will be an important tool for researchers and engineers interested in solving certain classes of complex problems. To build the foundations of this discipline and to develop an appropriate methodology, we will proceed in parallel both at an abstract level and by tackling a number of challenging problems in selected research domains. The research domains we have chosen are optimization, robotics, networks, and data mining.
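As one concrete, widely known instance of a swarm intelligence system applied to optimization, the sketch below runs a minimal ant colony optimization loop on a tiny random travelling-salesman instance; the parameters and the instance are illustrative assumptions, not E-SWARM deliverables.

```python
# Minimal ant colony optimization for a tiny symmetric TSP: many simple agents
# build tours stochastically, guided by shared pheromone trails that evaporate
# and are reinforced by short tours (decentralized, self-organizing search).
import random, math

random.seed(1)
n = 8
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[math.dist(pts[i], pts[j]) or 1e-9 for j in range(n)] for i in range(n)]
tau = [[1.0] * n for _ in range(n)]          # pheromone trails
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 10, 50

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best_tour, best_len = None, float("inf")
for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i = tour[-1]
            # probability ~ pheromone^alpha * (1/distance)^beta (roulette wheel)
            weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                       for j in unvisited]
            r, acc = random.random() * sum(w for _, w in weights), 0.0
            for j, w in weights:
                acc += w
                if acc >= r:
                    break
            tour.append(j)
            unvisited.remove(j)
        tours.append((tour, tour_length(tour)))
    # evaporation, then deposit: shorter tours deposit more pheromone
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, length in tours:
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a][b] += 1.0 / length
            tau[b][a] += 1.0 / length
        if length < best_len:
            best_tour, best_len = tour, length

print("best tour length found:", round(best_len, 3))
```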
Max ERC Funding
2 016 000 €
Duration
Start date: 2010-06-01, End date: 2015-05-31
Project acronym ECHR
Project Strengthening the European Court of Human Rights: More Accountability Through Better Legal Reasoning
Researcher (PI) Eva Brems
Host Institution (HI) UNIVERSITEIT GENT
Call Details Starting Grant (StG), SH2, ERC-2009-StG
Summary Human rights are under pressure, in Europe as elsewhere, due to several developments, namely [1] War on terror: the pressures generated by competing discourses; [2] coping with the dangers of rights inflation; [3] conflicting rights: how to handle rights as contested claims; [4] the challenges of dealing with universality under fire. In this context, the human rights leadership of the European Court of Human Rights is of crucial importance. Yet the Court is not fit for purpose. Inconsistencies and sloppy legal reasoning undermine both its credibility and the impact of its decisions. The research programme that I propose will strengthen the consistency and persuasiveness of the Court's legal reasoning so as to improve its accountability and transparency. My aim is to identify new technical solutions for important human rights problems, by the development and application of creative methodologies. The substantive innovations within the field of European human rights law that I propose to make are: [a] the development of new legal tools, which will consistently integrate the accommodation of the particularities of non-dominant groups into the reasoning of the European Court of Human Rights; [b] the development of a new theoretical framework combining minimum and maximum approaches to human rights protection, followed by its translation into clear legal criteria for use by the European Court of Human Rights; [c] the development of a script that will enable the adoption of a consistent approach by the European Court of Human Rights to conflicts between human rights. My methodological approach is characterized by the combination of empirical and normative dimensions, a 360° comparison, and the integration of qualitative research methods (interviews and focus groups with key stakeholders).
Max ERC Funding
1 370 000 €
Duration
Start date: 2009-11-01, End date: 2014-10-31
Project acronym ITOP
Project Integrated Theory and Observations of the Pleistocene
Researcher (PI) Michel Crucifix
Host Institution (HI) UNIVERSITE CATHOLIQUE DE LOUVAIN
Call Details Starting Grant (StG), PE10, ERC-2009-StG
Summary There are essentially two approaches to climate modelling. Over the past decades, efforts to understand climate dynamics have been dominated by computationally intensive modelling aiming to include all possible processes, essentially by integrating the equations for the relevant physics. This is the bottom-up approach. However, even the largest models include many approximations, and the cumulative effect of these approximations makes it impossible to predict the evolution of climate over several tens of thousands of years. For this reason a more phenomenological approach is also useful. It consists in identifying coherent spatio-temporal structures in the climate time series in order to understand how they interact. Theoretically, the two approaches focus on different levels of information and they should be complementary. In practice, they are generally perceived to be in philosophical opposition and there is no unifying methodological framework. Our ambition is to provide this methodological framework with a focus on climate dynamics at the scale of the Pleistocene (last 2 million years). We pursue a triple objective: (1) the framework must be rigorous but flexible enough to test competing theories of ice ages; (2) it must avoid the circular reasoning associated with 'tuning'; (3) it must provide a credible basis to unify our knowledge of climate dynamics and provide a state-of-the-art 'prediction horizon'. To this end we propose a methodology spanning different but complementary disciplines: physical climatology, empirical palaeoclimatology, dynamical system analysis and applied Bayesian statistics. It is intended to have wide applicability in climate science wherever there is an interest in using reduced-order representations of the climate system.
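To illustrate what a reduced-order, phenomenological representation of Pleistocene dynamics can look like, the sketch below integrates a generic forced nonlinear oscillator driven by a toy quasi-periodic forcing; the equations, parameters and timescales are illustrative stand-ins, not models proposed or calibrated by ITOP.

```python
# Illustrative reduced-order model: a forced van der Pol oscillator standing
# in for slow ice-volume dynamics paced by a crude "astronomical" forcing.
# All equations and parameters are generic assumptions for illustration only.
import math

def toy_insolation(t_kyr):
    """Crude stand-in for orbital forcing: two sinusoids near 41 and 23 kyr."""
    return (0.6 * math.sin(2 * math.pi * t_kyr / 41.0)
            + 0.4 * math.sin(2 * math.pi * t_kyr / 23.0))

def integrate(t_max_kyr=800.0, dt=0.1, mu=4.0, gamma=0.6):
    x, y, t = 1.0, 0.0, 0.0          # x ~ ice-volume anomaly, y ~ its rate
    series = []
    while t < t_max_kyr:
        dx = y
        dy = mu * (1.0 - x * x) * y - x + gamma * toy_insolation(t)
        x, y, t = x + dt * dx, y + dt * dy, t + dt   # explicit Euler step
        series.append((t, x))
    return series

if __name__ == "__main__":
    traj = integrate()
    print("last simulated points (kyr, ice-volume proxy):")
    for t, x in traj[-5:]:
        print(f"  {t:7.1f}  {x:+.2f}")
```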
Max ERC Funding
1 047 600 €
Duration
Start date: 2009-09-01, End date: 2014-08-31
Project acronym JAPANGREATDEPRESSION
Project 'Dead End': An Economic and Cultural History of Japan in the Age of the Great Depression, 1927-1937
Researcher (PI) Michael Schiltz
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), SH6, ERC-2009-StG
Summary The project presents an economic history and socio-cultural reconstruction of Japan in the age of the great depression; it is an attempt to demonstrate the depression's 'total' or multicontextual implications by outlining different but complementary views of what was defined as the depression's core problems (and their possible solutions) within different social classes and within different strands of thought. Seen in historical perspective, it covers the period from the 'Shōwa financial crisis' (1927) until the outbreak of the second Sino-Japanese War (1937). The project consists of three components: First, it addresses the macro-economic ideas in vogue at the time. It specifically concentrates on the personalities and roles of finance ministers Inoue Junnosuke and especially Takahashi Korekiyo ('Japan's Keynes'), who has widely been credited with cushioning the impact of the global depression on the Japanese economy. The second part of the project concerns the origins of depression in Japan's official and semi-official colonies in 1927 and the role the latter played in fueling the later crisis on the Japanese mainland. The project investigates the role of speculation, and inquires to what degree the effects of depression were 'imported' from the subsidiary economies of Taiwan, the Korean peninsula, and Manchuria. Third, as this project has a strong focus on the way economic realities were identified ('semantics'), it also develops a cultural history of the age of depression. The project identifies the rise of a new vocabulary and discourse in an era obsessed with the idea of an economic and moral dead end (ikizumari).
Max ERC Funding
549 442 €
Duration
Start date: 2009-10-01, End date: 2014-09-30
Project acronym KINPOR
Project First principle chemical kinetics in nanoporous materials
Researcher (PI) Veronique Van Speybroeck
Host Institution (HI) UNIVERSITEIT GENT
Call Details Starting Grant (StG), PE4, ERC-2009-StG
Summary The design of an optimal catalyst for a given process is at the heart of what chemists do, but is often more an art than a science. The quest for molecular control of any, either existing or new, production process is one of the great challenges nowadays. The need for accurate rate constants is crucial to fulfil this task. Molecular modelling has become a ubiquitous tool in many fields of science and engineering, but the calculation of reaction rates in nanoporous materials is still hardly performed due to major methodological bottlenecks. The aim of this proposal is the accurate prediction of chemical kinetics of catalytic reactions taking place in nanoporous materials from first principles. Two types of industrially important nanoporous materials are considered: zeotype materials, including the standard alumino-silicates but also related alumino-phosphates, and the fairly new Metal-Organic Frameworks (MOFs). New physical models are proposed to determine: (i) accurate reaction barriers that account for long-range host/guest interactions and (ii) the pre-exponential factor within a harmonic and anharmonic description, using cluster and periodic models and by means of static and dynamic approaches. The applications are carefully selected to benchmark the influence of each of the methodological issues on the final reaction rates. For the zeotype materials, reactions taking place during the Methanol-to-Olefin (MTO) process are chosen. A typical MTO catalyst is composed of an inorganic cage with essential organic compounds interacting as a supramolecular catalyst. For the hybrid materials, accurate interaction energies between xylene-based isomers and the MOF framework will first be determined. The outcome serves as a stepping stone for the study of oxidation reactions. This proposal creates perspectives for the design of tailor-made catalysts from the molecular level.
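To make concrete how a reaction barrier and a pre-exponential (entropic) contribution combine into a rate constant, the sketch below evaluates the standard Eyring transition-state expression; the barrier, activation entropy and temperatures are hypothetical placeholders, not KINPOR results or methods.

```python
# Transition-state-theory rate constant: k(T) = (kB*T/h) * exp(-dG_act/(R*T)),
# with dG_act = dH_act - T*dS_act. The harmonic pre-exponential factor enters
# through the activation entropy. Numerical values below are placeholders.
import math

KB = 1.380649e-23      # J/K
H = 6.62607015e-34     # J*s
R = 8.314462618        # J/(mol*K)

def eyring_rate(delta_h_act_kj_mol, delta_s_act_j_mol_k, temperature_k):
    """Eyring rate constant (1/s) from activation enthalpy and entropy."""
    dg_act = delta_h_act_kj_mol * 1e3 - temperature_k * delta_s_act_j_mol_k
    return (KB * temperature_k / H) * math.exp(-dg_act / (R * temperature_k))

if __name__ == "__main__":
    # Hypothetical zeolite-catalysed step: 120 kJ/mol barrier and -40 J/(mol K)
    # activation entropy (a tighter transition state), at three temperatures.
    for t in (500.0, 650.0, 800.0):
        print(f"T = {t:5.0f} K  ->  k = {eyring_rate(120.0, -40.0, t):.3e} s^-1")
```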
Max ERC Funding
1 150 000 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym LIMOD
Project The Limits of Demobilization, 1917-1923: Paramilitary Violence in Europe and the Wider World
Researcher (PI) Robert Benjamin Gerwarth
Host Institution (HI) UNIVERSITY COLLEGE DUBLIN, NATIONAL UNIVERSITY OF IRELAND, DUBLIN
Call Details Starting Grant (StG), SH6, ERC-2009-StG
Summary The purpose of the proposed project is to think afresh about the violent aftermath of the Great War and its legacies. This will be achieved by forging a team of researchers who focus on the violent conflicts that erupted in many of the former combatant states after 1917/18 from a comparative or transnational global perspective and the ways in which these conflicts were avoided in other areas. The project will differ from previous attempts to analyse the violent transition from war to peace in this period in several ways: The first is its comparative and transnational complexion. Despite recent attempts to write transnational histories of the Great War, the global history of its immediate aftermath is yet to be written. War and the politics of conflicts (and its aftermaths) are still largely studied according to divisions of national identity or ethnic difference. And yet clearly the First World War was a phenomenon that crossed frontiers and left legacies that possessed common themes. Indeed one of its consequences, especially in East-Central Europe but also in the shatter-zones of the Ottoman Empire and colonial contexts, was the destruction of frontiers, creating spaces without order or unquestioned government authority. The project will thus approach its subject matter by zones of victory, of defeat, and of mutilated or ambivalent victories rather than by nation-states, as a novel way of overcoming nation-centric frameworks of analysis. In terms of chronological scope, the investigation moves away from the traditional emphasis on the years 1914-18 as the crucible years of twentieth-century history. Furthermore, the project is at once European and global, investigating the emergence of violent conflicts in both the shatter-zones of European land empires and colonial conflicts.
Max ERC Funding
1 199 386 €
Duration
Start date: 2009-09-01, End date: 2014-02-28
Project acronym MIGRANT
Project Mining Graphs and Networks: a Theory-based approach
Researcher (PI) Jan Ramon
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary In this project we aim at formulating and enhancing theoretical foundations for the emerging field of graph mining. Graph mining is the field concerned with extracting interesting patterns and knowledge from graph- or network-structured data, such as can be found in chemistry, bioinformatics, the world wide web, social networks, etc. Recent work has shown that many standard data mining techniques can be extended to structured data and can yield interesting results, but also that when applied to complex real-world data, these standard techniques often become computationally intractable. In this project we aim at providing a better understanding of the complexity of the tasks considered in the field of graph mining, and at proposing techniques to better exploit the properties of the data. To this aim, we will bring together insights from the fields of data mining, graph theory, learning theory and different application fields, and add our own original contributions. Key features of the methodology include the ground-breaking integration of insights from graph theory in data mining and learning approaches, the development of efficient prototype algorithms, and the interdisciplinary collaboration with application domain experts to validate the practical value of the work. The potential impact of this project is significant, as it will be the first systematic study of the theory of graph mining, it will provide foundations on which later research can build further, and it will have applications in the many domains with complex data.
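A minimal sketch of the kind of combinatorial operation that underlies graph mining and drives its computational intractability: a backtracking test for whether a small pattern graph occurs in a data graph; the graphs are toy examples, not project data or the project's algorithms.

```python
# Backtracking check for subgraph occurrence (does pattern P embed into data
# graph G?), the basic operation behind frequent-subgraph mining. The search
# space grows combinatorially with graph size, which is one source of the
# intractability the summary refers to. Graphs are given as adjacency dicts.

def occurs(pattern_adj, graph_adj):
    """Return True if the pattern occurs as a (non-induced) subgraph of the data graph."""
    p_nodes = list(pattern_adj)

    def extend(mapping, used):
        if len(mapping) == len(p_nodes):
            return True
        u = p_nodes[len(mapping)]
        for v in graph_adj:
            if v in used:
                continue
            # every already-mapped pattern neighbour of u must map to a neighbour of v
            if all(mapping[w] in graph_adj[v] for w in pattern_adj[u] if w in mapping):
                if extend({**mapping, u: v}, used | {v}):
                    return True
        return False

    return extend({}, set())

if __name__ == "__main__":
    triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    data = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
    print("triangle occurs:", occurs(triangle, data))   # True: nodes a, b, c
```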
Max ERC Funding
1 716 066 €
Duration
Start date: 2009-12-01, End date: 2015-05-31
Project acronym MMS
Project The Mamlukisation of the Mamluk Sultanate. Political Traditions and State Formation in 15th century Egypt and Syria
Researcher (PI) Jo Van Steenbergen
Host Institution (HI) UNIVERSITEIT GENT
Call Details Starting Grant (StG), SH6, ERC-2009-StG
Summary I aim to radically reconsider standard views of late medieval Islamic history. Positing that prosopographical research will allow for a welcome reconstruction of the political traditions that dominated the Syro-Egyptian Mamluk sultanate in the 15th century, I endeavour to show how new traditions emerged that were constructed around the criterion of military slavery, and how this actually reflects a process of state formation, which puts this regime on a par with emerging European states.
Mamluk history (1250-1517) tends to be approached through a decline prism, as almost all studies presuppose that a static mamluk/military slavery system was the backbone of the political economy that came under increasing pressures from the 14th century onwards. In my research, I have demonstrated how this view of the 14th century, in particular, is totally incorrect, suggesting that it was only in the 15th century that crucial political transformations took place in the region.
My proposed research now aims to qualify the latter hypothesis and to reconstruct the dynamics of these transformations, via a thorough examination of the interplay between individuals, institutions, and social interactions in the course of 15th-century political events, as detailed in the massive corpus of contemporary source material. Results will be generated in three stages: via prosopographical study; through separate, but inter-related studies on the main research constituents (individuals, institutions, interaction); and in a book-length synthesis on political traditions.
In the longer term, validation of this hypothesis will enable me to address fundamental new questions in pre-modern (Islamic) history, as part of trans-cultural processes common to all Euro-Mediterranean core regions.
Max ERC Funding
1 200 000 €
Duration
Start date: 2009-10-01, End date: 2014-09-30
Project acronym NARESCO
Project Novel paradigms for massively parallel nanophotonic information processing
Researcher (PI) Peter Bienstman
Host Institution (HI) UNIVERSITEIT GENT
Call Details Starting Grant (StG), PE7, ERC-2009-StG
Summary In this project we will develop nanophotonic reservoir computing as a novel paradigm for massively parallel information processing. Reservoir computing is a recently proposed methodology from the field of machine learning and neural networks which has been used successfully in several pattern classification problems, such as speech and image recognition. However, it has so far mainly been used in software implementations, which limits its speed and power efficiency. Photonics could provide an excellent platform for a hardware implementation, because of the unique non-linear dynamics of photonic components arising from the interplay of photons and electrons, and because light has a phase in addition to an amplitude, which provides an important additional degree of freedom compared to a purely electronic hardware implementation. Our aim is to bring together a multidisciplinary team of specialists in photonics and machine learning to make this vision of massively parallel information processing using nanophotonics a reality. We will achieve these aims by constructing a set of prototypes of increasing complexity, able to tackle increasingly demanding tasks. There is clear potential for these techniques to perform information processing beyond the limits of today's conventional computing power: high-throughput, massively parallel classification problems such as processing radar data for road safety or real-time analysis of the data streams generated by the Large Hadron Collider.
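As an illustration of the reservoir computing methodology the project builds on, the following is a minimal software sketch of an echo state network in Python. It is not the proposed nanophotonic implementation; the network size, spectral radius, ridge parameter and toy prediction task are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random input and reservoir weights; only the linear readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave from the reservoir state.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X = run_reservoir(u[:-1])
y = u[1:]

# Linear readout trained by ridge regression (the only trained part of the system).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training error:", np.mean((X @ W_out - y) ** 2))

In a hardware realization of this scheme, the fixed recurrent network would correspond to the physical dynamics of coupled photonic components, with only the readout remaining a trained, digital step.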
Max ERC Funding
1 260 000 €
Duration
Start date: 2010-01-01, End date: 2015-12-31
Project acronym REPEATSASMUTATORS
Project The biological role of tandem repeats as hypervariable modules in genomes
Researcher (PI) Kevin Joan Verstrepen
Host Institution (HI) VIB
Call Details Starting Grant (StG), LS2, ERC-2009-StG
Summary Living organisms change and evolve because of mutations in their DNA. Recent findings suggest that some DNA sequences are hypervariable and evolvable, while others are extremely robust and remain constant over evolutionary timescales. The long-term goal of our research is to combine theory and experiments to investigate the molecular mechanisms underlying genetic robustness and evolvability. Apart from the fundamental aspects, we also plan to explore practical facets, including the swift evolution of pathogens and the construction of hypervariable modules for synthetic biology. In this proposal we focus on one specific topic, namely the role of tandem repeats as hypervariable modules in genomes. Tandem repeats are short DNA sequences that are repeated head-to-tail. Such repeats have traditionally been considered non-functional junk DNA and are therefore mostly ignored. However, our ongoing research shows that tandem repeats often occur in coding and regulatory sequences. The repeats show mutation rates that are 10- to 10,000-fold higher than mutation rates in the rest of the genome. These frequent mutations alter the function and/or expression of genes, allowing organisms to swiftly adapt to novel environments. Hence, repeats may be a common mechanism for organisms to generate potentially beneficial variability in certain regions of the genome, while keeping other regions stable and robust (Rando and Verstrepen, Cell 128: 655; Verstrepen et al., Nature Genetics 37: 986; Verstrepen et al., Nature Microbiol. 2: 15). We propose a multidisciplinary systems approach to unravel the biological role of repeats. First, we will use bioinformatics to screen various model genomes and to identify, categorize and analyze all tandem repeat loci in the model eukaryote Saccharomyces cerevisiae. Using these data, we will select a subset of repeats and apply experimental techniques to investigate the functional consequences of mutations in these repeats.
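As a minimal illustration of the kind of bioinformatic screen proposed in the first step, the short Python sketch below scans a DNA sequence for head-to-tail repeated units. The unit lengths, copy-number threshold and example sequence are illustrative assumptions; a genome-scale screen would rely on dedicated repeat-finding tools rather than this toy scanner.

def find_tandem_repeats(seq, min_unit=1, max_unit=6, min_copies=3):
    """Return (start, unit, copies) for runs of a short unit repeated head-to-tail."""
    hits = []
    i, n = 0, len(seq)
    while i < n:
        best = None
        for k in range(min_unit, max_unit + 1):
            unit = seq[i:i + k]
            if len(unit) < k:
                break
            copies = 1
            while seq[i + copies * k:i + (copies + 1) * k] == unit:
                copies += 1
            # keep the longest repeat tract starting at this position
            if copies >= min_copies and (best is None or copies * k > best[2] * len(best[1])):
                best = (i, unit, copies)
        if best:
            hits.append(best)
            i = best[0] + best[2] * len(best[1])  # continue past the repeat tract
        else:
            i += 1
    return hits

# Toy example: a (CAG)5 tract embedded in flanking sequence.
print(find_tandem_repeats("ATGCAGCAGCAGCAGCAGTTA"))  # [(3, 'CAG', 5)]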
Max ERC Funding
1 753 527 €
Duration
Start date: 2009-12-01, End date: 2014-11-30
Project acronym RLPHARMFMRI
Project Beyond dopamine: Characterizing the computational functions of midbrain modulatory neurotransmitter systems in human reinforcement learning using model-based pharmacological fMRI
Researcher (PI) John O'Doherty
Host Institution (HI) THE PROVOST, FELLOWS, FOUNDATION SCHOLARS & THE OTHER MEMBERS OF BOARD OF THE COLLEGE OF THE HOLY & UNDIVIDED TRINITY OF QUEEN ELIZABETH NEAR DUBLIN
Call Details Starting Grant (StG), LS5, ERC-2009-StG
Summary Understanding how humans and other animals are able to learn from experience and use this information to select future behavioural strategies to obtain the reinforcers necessary for survival is a fundamental research question in biology. Considerable progress has been made in recent years on the neural computational underpinnings of this process following the observation that the phasic activity of dopamine neurons in the midbrain resembles a prediction error from a formal computational theory known as reinforcement learning (RL). While much is known about the functions of dopamine in RL, much less is known about the computational functions of other modulatory neurotransmitter systems in the midbrain, such as the cholinergic, norepinephrine, and serotonergic systems. The goal of this research proposal to the ERC is to begin a systematic study of the computational functions of these other neurotransmitter systems (beyond dopamine) in RL. To do this, we will combine functional magnetic resonance imaging in human subjects, performed while they carry out simple decision-making tasks, with pharmacological manipulations that modulate systemic levels of these different neurotransmitter systems. We will combine computational model-based analyses with fMRI and behavioural data in order to explore the effects that these pharmacological modulations exert on different parameters and modules within RL. Specifically, we will test the contributions that the cholinergic system makes to setting the learning rate during RL and to mediating computations of expected uncertainty in the distribution of available rewards; we will test for the role of norepinephrine in balancing the rate of exploration and exploitation during decision making, as well as in encoding the level of unexpected uncertainty; and we will explore the possible role of serotonin in setting the rate of temporal discounting for reward, or in encoding prediction errors during aversive as opposed to reward learning.
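As an illustration of the model class referred to here, the Python sketch below implements a standard prediction-error (Rescorla-Wagner) value update with a softmax choice rule for a two-armed bandit task. The parameters alpha (learning rate) and beta (inverse temperature governing exploration versus exploitation) correspond to the quantities the project aims to relate to cholinergic and noradrenergic function; a temporal-discounting parameter would enter only in multi-step versions of the task. All numerical values are illustrative assumptions, not the project's fitted parameters.

import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.2, 3.0          # learning rate, inverse temperature
p_reward = [0.8, 0.2]           # true (unknown to the learner) reward probabilities
Q = np.zeros(2)                 # learned action values

def softmax(q, beta):
    """Probability of choosing each action given its current value."""
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

for trial in range(200):
    probs = softmax(Q, beta)
    choice = rng.choice(2, p=probs)
    reward = float(rng.random() < p_reward[choice])
    delta = reward - Q[choice]  # prediction error (the dopamine-like teaching signal)
    Q[choice] += alpha * delta  # learning-rate-weighted value update

print("learned values:", Q, "vs true probabilities:", p_reward)

In model-based fMRI analyses of this kind, trial-by-trial quantities such as the prediction error delta are regressed against the imaging data; the project asks how pharmacological manipulations shift parameters such as alpha and beta.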
Max ERC Funding
1 841 404 €
Duration
Start date: 2010-01-01, End date: 2010-09-30
Project acronym TUMETABO
Project Glycolytic contribution to cancer growth and metastasis
Researcher (PI) Pierre Sonveaux
Host Institution (HI) UNIVERSITE CATHOLIQUE DE LOUVAIN
Call Details Starting Grant (StG), LS4, ERC-2009-StG
Summary Cancer lethality is most often associated with the occurrence of distant metastases. To grow and become aggressive, cancers may undergo two critical adaptations: the glycolytic switch, corresponding to the uncoupling of glycolysis from the tricarboxylic acid (TCA) cycle, and the angiogenic switch, promoting neovascularization. In this high-risk/high-gain research program, we propose that the glycolytic switch precedes and promotes angiogenesis and metastatic dissemination in most types of cancer. We further envision that lactate, the end product of glycolysis, interfaces glycolysis with these processes through activation of the hypoxia-inducible factor HIF-1. A thorough characterization of the molecular pathway(s) initiated by lactate (using transcriptomic, gene-silencing, enzymatic and pharmacological interventions) has the potential to unravel new therapeutic targets that would simultaneously inhibit the consequences of the glycolytic switch on cancer aggressiveness. We anticipate that the plasma membrane lactate transporters of the (sodium) monocarboxylate transporter (S)MCT family are key determinants of autocrine and paracrine lactate signaling in cancer. Modulation of their activity or expression (notably by the generation of (S)MCT knockout mice) could thus profoundly affect tumor angiogenesis and metastasis. Since hypoxia is a hallmark of cancer, and glycolysis its direct consequence in cancer cells surviving hypoxia, the findings could have important consequences for the treatment of virtually all types of cancer. They could also impact our understanding of other pathologies, such as wound healing and myocardial infarction, in which the interplay between glycolysis, HIF-1 activation and angiogenesis could play a critical role.
Max ERC Funding
1 493 320 €
Duration
Start date: 2009-12-01, End date: 2014-11-30