Project acronym 1st-principles-discs
Project A First Principles Approach to Accretion Discs
Researcher (PI) Martin Elias Pessah
Host Institution (HI) KOBENHAVNS UNIVERSITET
Call Details Starting Grant (StG), PE9, ERC-2012-StG_20111012
Summary Most celestial bodies, from planets to stars to black holes, gain mass during their lives by means of an accretion disc. Understanding the physical processes that determine the rate at which matter accretes and energy is radiated in these discs is vital for unraveling the formation, evolution, and fate of almost every type of object in the Universe. Despite the fact that magnetic fields have been known to be crucial in accretion discs since the early 1990s, the majority of astrophysical questions that depend on the details of how disc accretion proceeds are still being addressed using the “standard” accretion disc model (developed in the early 1970s), where magnetic fields do not play an explicit role. This has prevented us from fully exploring the astrophysical consequences and observational signatures of realistic accretion disc models, leading to a profound disconnect between observations (usually interpreted with the standard paradigm) and modern accretion disc theory and numerical simulations (where magnetic turbulence is crucial). The goal of this proposal is to use several complementary approaches in order to finally move beyond the standard paradigm. This program has two main objectives: 1) Develop the theoretical framework to incorporate magnetic fields, and the ensuing turbulence, into self-consistent accretion disc models, and investigate their observational implications. 2) Investigate transport and radiative processes in collisionless disc regions, where non-thermal radiation originates, by employing a kinetic particle description of the plasma. In order to achieve these goals, we will use, and build upon, state-of-the-art magnetohydrodynamic and particle-in-cell codes in conjunction with theoretical modeling. This framework will make it possible to address fundamental questions on stellar and planet formation, binary systems with a compact object, and supermassive black hole feedback in a way that has no counterpart within the standard paradigm.
Max ERC Funding
1 793 697 €
Duration
Start date: 2013-02-01, End date: 2018-01-31
Project acronym 2STEPPARKIN
Project A novel two-step model for neurodegeneration in Parkinson’s disease
Researcher (PI) Emi Nagoshi
Host Institution (HI) UNIVERSITE DE GENEVE
Call Details Starting Grant (StG), LS5, ERC-2012-StG_20111109
Summary Parkinson’s disease (PD) is the second most common neurodegenerative disorder, primarily caused by the progressive loss of dopaminergic (DA) neurons in the substantia nigra (SN). Despite advances in the discovery of genes associated with PD, knowledge of PD pathogenesis is largely limited to the involvement of these genes in generic cell death pathways, and why degeneration is specific to DA neurons and why it is progressive remain enigmatic. The broad goal of our work is therefore to elucidate the mechanisms underlying the specific and progressive DA neuron degeneration in PD. Our new Drosophila model of PD, a Fer2 gene loss-of-function mutation, is unusually well suited to address these questions. Fer2 mutants exhibit specific and progressive death of brain DA neurons as well as severe locomotor defects and a short life span. Strikingly, the death of DA neurons is initiated in a small cluster of Fer2-expressing DA neurons and subsequently propagates to Fer2-negative DA neurons. We therefore propose a novel two-step model of the neurodegeneration in PD: primary cell death occurs in a specific, genetically defined subset of dopaminergic neurons, and subsequently the failure of neuronal connectivity triggers and propagates secondary cell death to the remaining DA neurons. In this research, we will test this hypothesis and investigate the underlying molecular mechanisms. This will be the first study to examine circuit-dependency in DA neuron degeneration. Our approach will use a combination of unbiased genomic techniques and candidate-based screening, in addition to the powerful Drosophila genetic toolbox. Furthermore, to test this hypothesis beyond the Drosophila model, we will establish new mouse models of PD that exhibit progressive DA neuron degeneration. The outcome of this research will likely revolutionize the understanding of PD pathogenesis and open an avenue toward the discovery of effective therapy strategies against PD.
Max ERC Funding
1 518 960 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym ACTIVIA
Project Visual Recognition of Function and Intention
Researcher (PI) Ivan Laptev
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Computer vision is concerned with the automated interpretation of images and video streams. Today's research is (mostly) aimed at answering queries such as ""Is this a picture of a dog?"", (classification) or sometimes ""Find the dog in this photo"" (detection). While categorisation and detection are useful for many tasks, inferring correct class labels is not the final answer to visual recognition. The categories and locations of objects do not provide direct understanding of their function i.e., how things work, what they can be used for, or how they can act and react. Such an understanding, however, would be highly desirable to answer currently unsolvable queries such as ""Am I in danger?"" or ""What can happen in this scene?"". Solving such queries is the aim of this proposal.
My goal is to uncover the functional properties of objects and the purpose of actions by addressing visual recognition from a different and yet unexplored perspective. The main novelty of this proposal is to leverage observations of people, i.e., their actions and interactions to automatically learn the use, the purpose and the function of objects and scenes from visual data. The project is timely as it builds upon the two key recent technological advances: (a) the immense progress in visual recognition of objects, scenes and human actions achieved in the last ten years, as well as (b) the emergence of a massive amount of public image and video data now available to train visual models.
ACTIVIA addresses fundamental research issues in automated interpretation of dynamic visual scenes, but its results are expected to serve as a basis for ground-breaking technological advances in practical applications. The recognition of functional properties and intentions as explored in this project will directly support high-impact applications such as detection of abnormal events, which are likely to revolutionise today's approaches to crime protection, hazard prevention, elderly care, and many others."
Max ERC Funding
1 497 420 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym ADAPT
Project Theory and Algorithms for Adaptive Particle Simulation
Researcher (PI) Stephane Redon
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary During the twentieth century, the development of macroscopic engineering was largely stimulated by progress in digital prototyping: cars, planes, boats, etc. are nowadays designed and tested on computers. Digital prototypes have progressively replaced physical ones, and effective computer-aided engineering tools have helped cut costs and reduce production cycles of these macroscopic systems.
The twenty-first century is likely to see a similar development at the atomic scale. Indeed, recent years have seen tremendous progress in nanotechnology, in particular in the ability to control matter at the atomic scale. Similar to what has happened with macroscopic engineering, powerful and generic computational tools will be needed to engineer complex nanosystems through modeling and simulation. As a result, a major challenge is to develop efficient simulation methods and algorithms.
NANO-D, the INRIA research group I started in January 2008 in Grenoble, France, aims at developing efficient computational methods for modeling and simulating complex nanosystems, both natural and artificial. In particular, NANO-D develops SAMSON (Software for Adaptive Modeling and Simulation Of Nanosystems), a software application which gathers all algorithms designed by the group and its collaborators.
In this project, I propose to develop a unified theory, and associated algorithms, for adaptive particle simulation. The proposed theory will avoid problems that plague current popular multi-scale or hybrid simulation approaches by simulating a single potential throughout the system, while allowing users to finely trade precision for computational speed.
I believe the full development of the adaptive particle simulation theory will have an important impact on current modeling and simulation practices, and will enable the practical design of complex nanosystems on desktop computers, which should significantly boost the emergence of generic nano-engineering.
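The precision-for-speed trade-off described in this summary can be sketched in a few lines; the following is a hypothetical toy illustration of the adaptive idea (one potential for the whole system, with only the most energetic particles advanced each step), not the actual SAMSON algorithm:

```python
import numpy as np

# Toy adaptive particle simulation: every particle feels a single harmonic
# potential U(x) = x^2 / 2, but each step only the k most energetic
# particles are advanced; the rest are temporarily frozen to save work.
rng = np.random.default_rng(0)
n, k, dt = 100, 20, 0.01       # n particles, k active per step, time step
x = rng.normal(size=n)         # positions
v = rng.normal(size=n)         # velocities

def force(pos):
    return -pos                # F = -dU/dx for the harmonic potential

for _ in range(1000):
    active = np.argsort(v**2)[-k:]       # indices of the k fastest particles
    v[active] += dt * force(x[active])   # symplectic-Euler update, active only
    x[active] += dt * v[active]
```

Tuning k from n down to a handful is what "finely trading precision for computational speed" amounts to in this caricature: k = n recovers the full simulation, smaller k costs accuracy but scales the per-step work down proportionally.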
Max ERC Funding
1 476 882 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym ALGAME
Project Algorithms, Games, Mechanisms, and the Price of Anarchy
Researcher (PI) Elias Koutsoupias
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary The objective of this proposal is to bring together a local team of young researchers who will work closely with international collaborators to advance the state of the art of Algorithmic Game Theory and open new avenues of research at the interface of Computer Science, Game Theory, and Economics. The proposal consists mainly of three intertwined research strands: algorithmic mechanism design, price of anarchy, and online algorithms.
Specifically, we will attempt to resolve some outstanding open problems in algorithmic mechanism design: characterizing the incentive compatible mechanisms for important domains, such as the domain of combinatorial auctions, and resolving the approximation ratio of mechanisms for scheduling unrelated machines. More generally, we will study centralized and distributed algorithms whose inputs are controlled by selfish agents that are interested in the outcome of the computation. We will investigate new notions of mechanisms with strong truthfulness and limited susceptibility to externalities that can facilitate modular design of mechanisms of complex domains.
We will expand the current research on the price of anarchy to time-dependent games where the players can select not only how to act but also when to act. We also plan to resolve outstanding questions on the price of stability and to build a robust approach to these questions, similar to smooth analysis. For repeated games, we will investigate convergence of simple strategies (e.g., fictitious play), online fairness, and strategic considerations (e.g., metagames). More generally, our aim is to find a productive formulation of playing unknown games by drawing on the fields of online algorithms and machine learning.
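The price-of-anarchy concept at the centre of this proposal can be illustrated with Pigou's classic routing example; the sketch below is an illustration added here, not material from the proposal itself:

```python
# Pigou's example: unit traffic from s to t over two parallel links with
# latency functions l1(x) = 1 (constant) and l2(x) = x (linear).

def social_cost(p):
    """Total latency when a fraction p of the traffic uses the linear link."""
    return p * p + (1.0 - p) * 1.0   # p users each pay l2(p) = p; rest pay 1

# Nash equilibrium: the linear link costs at most 1 for any x <= 1, so all
# selfish traffic piles onto it and everyone pays 1.
eq_cost = social_cost(1.0)           # = 1.0

# Social optimum: minimize p^2 + (1 - p); 2p - 1 = 0 gives p = 1/2.
opt_cost = social_cost(0.5)          # = 0.75

price_of_anarchy = eq_cost / opt_cost  # = 4/3
```

The ratio 4/3 is the worst case over all linear latency functions, which hints at why bounding such ratios for richer game classes (time-dependent, repeated, unknown games) is a substantial research program.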
Max ERC Funding
2 461 000 €
Duration
Start date: 2013-04-01, End date: 2019-03-31
Project acronym ALLEGRO
Project Active large-scale learning for visual recognition
Researcher (PI) Cordelia Schmid
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary A massive and ever-growing amount of digital image and video content is available today, on sites such as Flickr and YouTube, in audiovisual archives such as those of the BBC and INA, and in personal collections. In most cases, it comes with additional information, such as text, audio or other metadata, that forms a rather sparse and noisy, yet rich and diverse source of annotation, ideally suited to emerging weakly supervised and active machine learning technology. The ALLEGRO project will take visual recognition to the next level by using this largely untapped source of data to automatically learn visual models. The main research objective of our project is the development of new algorithms and computer software capable of autonomously exploring evolving data collections, selecting the relevant information, and determining the visual models most appropriate for different object, scene, and activity categories. An emphasis will be put on learning visual models from video, a particularly rich source of information, and on the representation of human activities, one of today's most challenging problems in computer vision. Although this project addresses fundamental research issues, it is expected to result in significant advances in high-impact applications that range from visual mining of the Web and automated annotation and organization of family photo and video albums to large-scale information retrieval in television archives.
Max ERC Funding
2 493 322 €
Duration
Start date: 2013-04-01, End date: 2019-03-31
Project acronym AMYLOID
Project Identification and modulation of pathogenic Amyloid beta-peptide species
Researcher (PI) Christian Haass
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Advanced Grant (AdG), LS5, ERC-2012-ADG_20120314
Summary The frequency of Alzheimer's disease (AD) will increase dramatically in the ageing Western society during the next decades. Currently, about 18 million people worldwide suffer from AD. Since no cure is available, this devastating disorder represents one of the most challenging socio-economic problems of our future. As onset and progression of AD are triggered by the amyloid cascade, I will pay particular attention to the amyloid β-peptide (Aβ). The reason for this approach is that, even though the Aβ-generating processing pathway was identified 20 years ago (Haass et al., Nature 1992a & b), the identity of the Aβ species that initiate the deadly cascade is still unknown. I will first tackle this challenge by investigating whether a novel and so far completely overlooked proteolytic processing pathway is involved in the generation of Aβ species capable of initiating spreading of pathology and neurotoxicity. I will then search for modulating proteins that could affect the generation of pathological Aβ species. This includes a genome-wide screen for modifiers of gamma-secretase, one of the proteases involved in Aβ generation, as well as a targeted search for RNA-binding proteins capable of post-transcriptionally regulating beta- and alpha-secretase. In a disease-crossing approach, RNA-binding proteins, which were recently found to be deposited not only in Frontotemporal Lobar Degeneration and Amyotrophic Lateral Sclerosis but also in many AD cases, will be investigated for their potential to modulate Aβ aggregation and AD pathology. Modifiers and novel antibodies specifically recognizing neurotoxic Aβ assemblies will be validated for their potential to prevent not only amyloid plaque formation, but also the spreading of pathology as well as neurotoxicity. In vivo validations include studies in innovative zebrafish models, which allow live imaging of neuronal cell death, as well as the establishment of microPET amyloid imaging for longitudinal studies in individual animals.
Max ERC Funding
2 497 020 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym ASTROLAB
Project Cold Collisions and the Pathways Toward Life in Interstellar Space
Researcher (PI) Holger Kreckel
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Starting Grant (StG), PE9, ERC-2012-StG_20111012
Summary Modern telescopes like Herschel and ALMA open up a new window into molecular astrophysics, revealing a surprisingly rich chemistry that operates even at low densities and low temperatures. Observations with these instruments have the potential to unravel key questions of astrobiology, like the accumulation of water and pre-biotic organic molecules on (exo)planets from asteroids and comets. Hand in hand with the heightened observational activity comes a strong demand for a thorough understanding of the molecular formation mechanisms. The vast majority of interstellar molecules are formed in ion-neutral reactions that remain efficient even at low temperatures. Unfortunately, the unusual nature of these processes under terrestrial conditions makes their laboratory study extremely difficult.
To address these issues, I propose to build a versatile merged-beams setup for laboratory studies of ion-neutral collisions at the Cryogenic Storage Ring (CSR), the most ambitious of the next-generation storage devices under development worldwide. With this experimental setup, I will make use of a low-temperature and low-density environment that is ideal for simulating the conditions prevailing in interstellar space. The cryogenic environment, in combination with laser-generated ground-state atom beams, will allow me to perform precise energy-resolved rate coefficient measurements for reactions between cold molecular ions (e.g., H2+, H3+, HCO+, CH2+, CH3+) and neutral atoms (H, D, C or O) in order to shed light on long-standing problems of astrochemistry and the formation of organic molecules in space.
With the wide range of accessible collision energies (corresponding to 40-40000 K), I will be able to provide data that are crucial for the interpretation of molecular observations in a variety of objects, ranging from cold molecular clouds to warm layers in protoplanetary disks.
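For scale, the quoted 40-40000 K range translates into characteristic collision energies via the standard thermal relation E ≈ k_B T; the conversion below is an illustration added here, not a calculation from the proposal:

```python
# Characteristic collision energy k_B * T for the quoted temperature range.
K_B_EV_PER_K = 8.617333262e-5  # Boltzmann constant in eV/K (CODATA value)

def thermal_energy_ev(temperature_k):
    """Mean thermal energy scale k_B * T in electronvolts."""
    return K_B_EV_PER_K * temperature_k

low = thermal_energy_ev(40.0)      # ~3.4 meV: cold molecular-cloud conditions
high = thermal_energy_ev(40000.0)  # ~3.4 eV: the energetic end of the range
```

So the experiment spans roughly three orders of magnitude in collision energy, from the meV regime relevant to cold clouds up to a few eV, where many ion-neutral reaction barriers become accessible.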
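The quoted temperature range can be translated into characteristic collision energies via E ≈ k_B·T. A quick orientation calculation (the thermal correspondence is a rough rule of thumb, not a statement from the proposal):

```python
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K (CODATA value)

def thermal_energy_ev(temp_k):
    """Characteristic collision energy E = k_B * T, in eV."""
    return K_B_EV * temp_k

# The 40-40000 K range spans roughly 3.4 meV to 3.4 eV:
for t in (40, 40000):
    print(f"{t} K -> {thermal_energy_ev(t):.4g} eV")
```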
Max ERC Funding
1 486 800 €
Duration
Start date: 2012-09-01, End date: 2017-11-30
Project acronym AXONSURVIVAL
Project Axon survival: the role of protein synthesis
Researcher (PI) Christine Elizabeth Holt
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Advanced Grant (AdG), LS5, ERC-2012-ADG_20120314
Summary Neurons make long-distance connections with synaptic targets via axons. These axons survive throughout the lifetime of an organism, often many years in mammals, yet how axons are maintained is not fully understood. Recently, we provided in vivo evidence that local mRNA translation in mature axons is required for their maintenance. This new finding, along with in vitro work from other groups, indicates that promoting axonal protein synthesis is a key mechanism by which trophic factors act to prevent axon degeneration. Here we propose a program of research to investigate the importance of ribosomal proteins (RPs) in axon maintenance and degeneration. The rationale for this is fourfold. First, recent genome-wide studies of axonal transcriptomes have revealed that protein synthesis (including RP mRNAs) is the highest functional category in several neuronal types. Second, some RPs have evolved extra-ribosomal functions that include signalling, such as 67LR, which acts both as a cell surface receptor for laminin and as an RP. Third, mutations in different RPs in vertebrates cause unexpectedly specific defects, such as the loss of optic axons. Fourth, preliminary results show that RP mRNAs are translated in optic axons in response to trophic factors. Collectively, these findings lead us to propose that locally synthesized RPs play a role in axon maintenance through either ribosomal or extra-ribosomal function. To pursue this proposal, we will perform unbiased screens and functional assays using an array of experimental approaches and animal models. By gaining an understanding of how local RP synthesis contributes to axon survival, our studies have the potential to provide novel insights into how components conventionally associated with a housekeeping role (translation) are linked to axon degeneration.
Our findings could provide new directions for developing therapeutic tools for neurodegenerative disorders and may have an impact on more diverse areas of biology and disease.
Max ERC Funding
2 426 573 €
Duration
Start date: 2013-03-01, End date: 2018-09-30
Project acronym BeyondWorstCase
Project Algorithms beyond the Worst Case
Researcher (PI) Heiko Roglin
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary For many optimization problems that arise in logistics, information retrieval, and other contexts, the classical theory of algorithms has lost its grip on reality because it is based on a pessimistic worst-case perspective, in which the performance of an algorithm is measured solely by its behavior on the worst possible input. This does not take into consideration that worst-case inputs are often rather contrived and occur only rarely in practical applications. As a consequence, for many problems the classical theory is not able to differentiate meaningfully between different algorithms. Even worse, for some important problems it recommends algorithms that perform badly in practice over algorithms that work well in practice, solely because the latter have poor artificial worst-case performance.
We will study classic optimization problems (the traveling salesperson problem, linear programming, etc.) as well as problems coming from machine learning and information retrieval. What all these problems have in common is that the algorithms most successful in practice have a devastating worst-case performance, even though they clearly outperform the theoretically best algorithms.
Only in recent years has a paradigm shift towards a more realistic and robust algorithmic theory been initiated. This project will play a major role in this paradigm shift by developing and exploring novel theoretical approaches (e.g. smoothed analysis) to reconcile theory and practice. A more realistic theory will have a profound impact on the design and analysis of algorithms in the future, and the insights gained in this project will lead to algorithmic tools for large-scale optimization problems that improve on existing ad hoc methods. We will not only work theoretically but also test the applicability of our theoretical considerations in experimental studies.
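Smoothed analysis, mentioned above, interpolates between worst-case and average-case analysis: an adversary fixes an input, the input is then slightly perturbed at random, and one studies the expected running time over the perturbation. A minimal illustrative sketch (not from the proposal; the quicksort variant, input size, and noise level are chosen purely for illustration) shows the effect empirically:

```python
import random

def quicksort_comparisons(a):
    """Count comparisons made by a deterministic first-element-pivot quicksort."""
    count = 0
    def qs(lst):
        nonlocal count
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        count += len(rest)  # each remaining element is compared to the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)
    qs(a)
    return count

random.seed(0)
n = 500
worst = list(range(n))  # already-sorted input: the classic worst case
# Smoothed instance: adversarial input plus small Gaussian noise on each key.
perturbed = [x + random.gauss(0, 0.1 * n) for x in worst]

print(quicksort_comparisons(worst))      # n*(n-1)/2 = 124750 comparisons
print(quicksort_comparisons(perturbed))  # typically far fewer, roughly O(n log n)
```

On the adversarial sorted input the comparison count is quadratic, while after perturbation it drops dramatically; smoothed analysis makes this observation rigorous by bounding the expected cost as a function of the input size and the noise magnitude.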
Max ERC Funding
1 235 820 €
Duration
Start date: 2012-10-01, End date: 2017-09-30