Project acronym AstroFunc
Project Molecular Studies of Astrocyte Function in Health and Disease
Researcher (PI) Matthew Guy Holt
Host Institution (HI) VIB
Call Details Starting Grant (StG), LS5, ERC-2011-StG_20101109
Summary The brain consists of two basic cell types: neurons and glia. However, the study of glia in brain function has traditionally been neglected in favor of their more “illustrious” counterparts, the neurons, which are regarded as the computational units of the brain. Glia have usually been dismissed as “brain glue”, a supportive matrix on which neurons grow and function. Recent evidence, however, suggests that glia are more than passive “glue” and actually modulate neuronal function. This has led to the proposal of a “tripartite synapse”, which recognizes pre- and postsynaptic neuronal elements and glia as a functional unit.
However, what is still lacking is rudimentary information on how these cells actually function in situ. Here we propose a “bottom-up” approach: identifying the molecules (and interactions) that control glial function in situ. This is complicated by the fact that glia show profound changes when placed into culture. To circumvent this, we will use recently developed cell sorting techniques to rapidly isolate genetically marked glial cells from the brain, which can then be analyzed using advanced biochemical and physiological techniques. The long-term aim is to identify proteins that can be “tagged” using transgenic technologies, allowing protein function to be studied in real time in vivo with sophisticated imaging techniques. Given the number of proteins that may be identified, we envisage developing new methods of generating transgenic animals that provide an attractive alternative to current “state-of-the-art” technology.
The importance of studying glial function is underscored by the fact that every major brain pathology involves reactive gliosis. In the time it takes to read this abstract, five people in the EU will have suffered a stroke, not to mention those who suffer other forms of neurotrauma. Thus, understanding glial function is critical not only for understanding normal brain function, but also for relieving the burden of severe neurological injury and disease.
Max ERC Funding
1 490 168 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym FLUOROCODE
Project FLUOROCODE: a super-resolution optical map of DNA
Researcher (PI) Johan M. V. Hofkens
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Advanced Grant (AdG), PE4, ERC-2011-ADG_20110209
Summary "There has been an immense investment of time, effort and resources in the development of the technologies that enable DNA sequencing in the past 10 years. Despite the significant advances made, all of the current genomic sequencing technologies suffer from two important shortcomings. Firstly, sample preparation is time-consuming and expensive, and requiring a full day for sample preparation for next-generation sequencing experiments. Secondly, sequence information is delivered in short fragments, which are then assembled into a complete genome. Assembly is time-consuming and often results in a highly fragmented genomic sequence and the loss of important information on large-scale structural variation within the genome.
We recently developed a super-resolution DNA mapping technology that allows us to study genetic-scale features in genomic-length DNA molecules. Labelling the DNA with fluorescent molecules at specific sequences and imaging it with high-resolution fluorescence microscopy enabled us to produce a map of a genomic DNA sequence with unparalleled resolution, the so-called FLUOROCODE. In this project we aim to extend our methodology to map longer DNA molecules and to develop a multi-colour version of the FLUOROCODE that will allow us to read genomic DNA molecules like a barcode and to probe DNA methylation status. Sample preparation, DNA labelling and deposition for imaging will be integrated to allow rapid mapping of DNA molecules. At the same time, nanopores will be explored as a route to high-throughput DNA mapping.
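To make the mapping principle concrete, the sketch below predicts where fluorescent labels would fall along a reference molecule and bins them at optical resolution to form a barcode that measured label positions could be matched against. It is a minimal illustration only: the motif, the 0.34 nm/bp stretching factor and the 100 nm resolution bin are assumptions, and the function names are hypothetical rather than part of any FLUOROCODE software.

```python
import re

def expected_barcode(sequence, motif="TCGA", bin_nm=100, nm_per_bp=0.34):
    """Predict an optical 'barcode' for a reference sequence.

    motif     -- hypothetical labelling site (e.g. a 4-bp methyltransferase
                 recognition sequence); the actual labelling chemistry is
                 an assumption here.
    bin_nm    -- assumed effective optical resolution in nanometres.
    nm_per_bp -- approximate contour length of stretched DNA per base pair.
    """
    # Positions (in bp) where a fluorophore would be deposited.
    sites_bp = [m.start() for m in re.finditer(motif, sequence.upper())]
    # Convert to physical coordinates along the stretched molecule (nm),
    # then collapse sites that fall within the same resolution bin.
    return sorted({int(p * nm_per_bp // bin_nm) for p in sites_bp})

def barcode_similarity(reference_bins, measured_bins):
    """Toy metric: fraction of reference bins recovered in a measurement."""
    a, b = set(reference_bins), set(measured_bins)
    return len(a & b) / max(len(a), 1)

# Toy usage with a placeholder reference sequence.
reference = "ATCGATCGA" * 5000
ref_code = expected_barcode(reference)
measured = ref_code[::2]   # pretend only half the labels were detected
print(barcode_similarity(ref_code, measured))
```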
FLUOROCODE will develop technology that aims to complement the information derived from current DNA sequencing platforms. The technology developed by FLUOROCODE will enable DNA mapping at unprecedented speed and for a fraction of the cost of a typical DNA sequencing project. We anticipate that our method will find applications in the rapid identification of pathogens and in producing genomic scaffolds to improve genome sequence assembly.
Max ERC Funding
2 423 160 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym KONGOKING
Project Political centralization, economic integration and language evolution in Central Africa: An interdisciplinary approach to the early history of the Kongo kingdom
Researcher (PI) Koen André Georges Bostoen
Host Institution (HI) UNIVERSITEIT GENT
Call Details Starting Grant (StG), SH6, ERC-2011-StG_20101124
Summary The magnificent Kongo kingdom, which arose in the Atlantic Coast region of Equatorial Africa, is a famous emblem of Africa’s past and an important cultural landmark for Africans and the African Diaspora. Thanks to its early introduction to literacy and its involvement in the Trans-Atlantic trade, the history of this part of sub-Saharan Africa from 1500 onwards is better known than that of most other regions. Nevertheless, very little is known about the origins and earlier history of the kingdom. This grant application therefore proposes an interdisciplinary approach to this question, in which archaeology and historical linguistics, two key disciplines for reconstructing early history in Africa, play the most prominent roles. Paradoxically, although the wider region of the Kongo kingdom is one of the best-documented areas of Central Africa from a historical and ethnographic point of view, it is virtually unknown archaeologically. The proposed research team will therefore undertake pioneering excavations at several capital sites of the old kingdom. Similarly, no comprehensive historical study has covered the languages of the Kongo and closely affiliated kingdoms, even though the earliest documents containing Bantu data, going back to the early 16th century, originate from this region. The team will therefore undertake a historical-comparative study of the Kikongo dialect cluster and surrounding language groups, such as Kimbundu, Teke and Punu-Shira, systematically comparing present-day data with data from the old documents. Special attention will be given to cultural vocabulary related to politics, religion, social organization, trade and crafts, which, in conjunction with the archaeological discoveries, will shed new light on the early history of the Kongo kingdom.
Max ERC Funding
1 400 760 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym microCODE
Project Microfluidic Combinatorial On Demand Systems: a Platform for High-Throughput Screening in Chemistry and Biotechnology
Researcher (PI) Piotr Garstecki
Host Institution (HI) INSTYTUT CHEMII FIZYCZNEJ POLSKIEJ AKADEMII NAUK
Call Details Starting Grant (StG), PE4, ERC-2011-StG_20101014
Summary This proposal addresses an important opportunity in the rapidly developing art of microfluidics. On the one hand, vast expertise is available on the automation of single-phase flows via microvalves or electrokinetics and on the flow of drops on planar electrodes. These systems are perfectly suited to a range of applications but are inherently inefficient at handling massively large numbers of processes, because of the correspondingly large number of input/output controls, which at best scales logarithmically in the number of processes. On the other hand, conducting reactions in thousands of microdroplets embodies many of the most acclaimed promises of microfluidics: ultra-miniaturisation, speed, rapid mixing and extensive control of physical conditions. Demonstrations of cell incubation, in-vitro translation and directed evolution confirm that these techniques can reduce the cost and time of existing processes by orders of magnitude. At the moment, however, droplet microfluidics is almost completely passive (with the exception of sorting).
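To illustrate the scaling argument above: in a binary valve-multiplexer architecture, addressing one of N flow channels requires on the order of 2·log2(N) control lines, so the control overhead grows slowly with N but never vanishes. The sketch below is a back-of-the-envelope illustration of that scaling under this assumed architecture, not a description of the systems discussed here.

```python
import math

def control_lines_for(n_channels: int) -> int:
    """Control lines needed by a binary valve multiplexer to address one of
    n_channels flow channels: one pair of valves per address bit (assumed)."""
    return 2 * math.ceil(math.log2(n_channels))

for n in (8, 64, 1024, 10000):
    print(f"{n:>6} channels -> {control_lines_for(n)} control lines")
```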
We recently demonstrated the use of external valves to automate the formation and motion of droplets on simple disposable chips, screening up to 10 000 compositions per hour. We propose to develop externally controlled programmable modules for i) multiplexed, on-demand generation of multiple emulsions, ii) aspiration of libraries of samples and multiplexing of linear libraries into full cross matrices, iii) splitting drops into two, a few, or many (e.g. 10 000) daughter droplets, iv) optical monitoring of the presence and content of droplets, v) counting cells inside the drops, vi) circulating drops, vii) titration, and viii) holding paramagnetic beads in drops. Our design rules will allow us to integrate these modules into externally controlled systems for research on i) combinatorial synthesis, ii) materials science, iii) the role of noise in metabolic networks, iv) the evolution of bacteria, and v) inexpensive multiplexed diagnostic systems, including cytometry, PCR and ELISA assays in drops.
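The cross-matrix module (ii) has simple but instructive combinatorics: two linear libraries of n and m stock solutions generate n × m pairings, multiplied further by any titration ratios. The following sketch merely enumerates such a screening schedule; the data structures and names are illustrative and do not represent the project's actual control software.

```python
from itertools import product

def cross_matrix_schedule(library_a, library_b, titration_steps=(0.25, 0.5, 0.75)):
    """Enumerate droplet compositions for a full cross-matrix screen.

    Each droplet combines one member of library_a with one member of
    library_b at a given mixing ratio; two 10-member libraries with three
    ratios already give 300 distinct droplet compositions.
    """
    schedule = []
    for a, b in product(library_a, library_b):
        for ratio in titration_steps:
            schedule.append({"component_a": a, "component_b": b,
                             "fraction_a": ratio, "fraction_b": 1 - ratio})
    return schedule

# Toy usage with placeholder library names.
drops = cross_matrix_schedule([f"A{i}" for i in range(10)],
                              [f"B{j}" for j in range(10)])
print(len(drops))   # 10 * 10 * 3 = 300 compositions
```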
Max ERC Funding
1 749 600 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym MODES
Project Modal analysis of atmospheric balance, predictability and climate
Researcher (PI) Nedjeljka Zagar
Host Institution (HI) UNIVERZA V LJUBLJANI
Call Details Starting Grant (StG), PE10, ERC-2011-StG_20101014
Summary Despite major progress in the modelling of atmospheric processes, growing computing capabilities and concentrated efforts to increase the complexity of atmospheric models, natural atmospheric climate variability, its predictability and its interaction with anthropogenic influences are still far from well understood. This project aims to advance scientific understanding of the dynamical properties of the atmosphere and climate system across many spatial and temporal scales.
It is proposed to study atmospheric balance and predictability in terms of the percentage of energy associated with the various types of motion: balanced (Rossby-type) motions and unbalanced (inertio-gravity) motions. This representation of the atmosphere is called the normal-mode function representation, and it is at the heart of the methodology proposed in this project.
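Once a circulation field has been projected onto normal-mode functions, this balance diagnostic reduces to summing the squared expansion coefficients over the Rossby and inertio-gravity subsets and expressing each sum as a percentage of the total energy. The sketch below assumes the projection coefficients are already available as complex arrays; the array names and the toy data are illustrative and do not reflect the project's actual software.

```python
import numpy as np

def energy_partition(chi_rossby, chi_ig):
    """Percentage of total wave energy in balanced (Rossby) and
    unbalanced (inertio-gravity) modes.

    chi_rossby, chi_ig -- complex expansion coefficients from a normal-mode
    function projection (assumed given; normalisation constants are taken
    to be folded into the coefficients).
    """
    e_rossby = np.sum(np.abs(chi_rossby) ** 2)
    e_ig = np.sum(np.abs(chi_ig) ** 2)
    total = e_rossby + e_ig
    return 100 * e_rossby / total, 100 * e_ig / total

# Toy usage with random coefficients standing in for a real projection.
rng = np.random.default_rng(0)
rossby = rng.normal(size=200) + 1j * rng.normal(size=200)
ig = 0.3 * (rng.normal(size=200) + 1j * rng.normal(size=200))
print(energy_partition(rossby, ig))  # roughly (92%, 8%) for these amplitudes
```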
The project builds on theoretical foundations laid in the 1970s at the National Center for Atmospheric Research in the USA; with the support of the original developers, it will apply the normal-mode function representation to questions to which it could not previously be applied reliably. The project also relies on the PI's accomplishments in weather and data-assimilation modelling, which it will extend to new research areas.
The project will quantify balance in analysis datasets and ensemble forecasting systems, and will use the results as a starting point for assessing climate models' ability to represent the present climate and possible changes of balance in simulations of future climate scenarios. The results will allow a dynamical classification of climate models based on their balance properties. Predictability will be studied by comparing the temporal variability of balance in forecasts across various spatial scales. An important project outcome will be a free-access, user-friendly tool for carrying out a physically based analysis of weather and climate model outputs.
Max ERC Funding
495 482 €
Duration
Start date: 2011-12-01, End date: 2016-11-30