Project acronym 4D-EEG
Project 4D-EEG: A new tool to investigate the spatial and temporal activity patterns in the brain
Researcher (PI) Franciscus C.T. Van Der Helm
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Advanced Grant (AdG), PE7, ERC-2011-ADG_20110209
Summary Our first goal is to develop a new tool to determine brain activity with a high temporal (< 1 msec) and spatial (about 2 mm) resolution, with a focus on motor control. High-density EEG (up to 256 electrodes) will be used for EEG source localization. Advanced force-controlled robot manipulators will be used to impose continuous force perturbations on the joints. Advanced closed-loop system identification algorithms will identify the dynamic EEG response of multiple brain areas to the perturbation, leading to a functional interpretation of EEG. The propagation of the signal in time and 3D space through the cortex can then be monitored: 4D-EEG. Preliminary experiments with EEG localization have shown that continuous force perturbations result in a better signal-to-noise ratio and coherence than the current method using transient perturbations.
4D-EEG will be a direct measure of the neural activity in the brain with an excellent temporal response and easy to use in combination with motor control tasks. The new 4D-EEG method is expected to provide a breakthrough in comparison to functional MRI (fMRI) when elucidating the meaning of cortical map plasticity in motor learning.
Our second goal is to generate and validate new hypotheses about the longitudinal relationship between motor learning and cortical map plasticity by applying 4D-EEG clinically in an intensive, repeated-measurement design in patients suffering from stroke. The application of 4D-EEG combined with haptic robots will allow us to discover how dynamics in cortical map plasticity relate to upper-limb recovery after stroke, in terms of both neural repair and the behavioural compensation strategies used while performing meaningful motor tasks. The non-invasive 4D-EEG technique combined with haptic robots will open a window on what and how patients (re)learn when showing motor recovery after stroke, allowing us to develop more effective patient-tailored therapies in neuro-rehabilitation.
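The core of the proposed measurement is closed-loop system identification: estimating the dynamic response of a recorded signal to a continuous input perturbation. A minimal sketch of the idea, assuming a toy linear system rather than the project's actual EEG pipeline, is a least-squares fit of a finite impulse response (FIR) from a random continuous perturbation to a noisy output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy closed-loop identification sketch (assumed system, not the project's
# EEG model): recover the impulse response from perturbation u to output y.
n, taps = 5000, 16
u = rng.standard_normal(n)                   # continuous random perturbation
h_true = np.exp(-np.arange(taps) / 4.0)      # assumed "plant" impulse response
y = np.convolve(u, h_true)[:n] + 0.1 * rng.standard_normal(n)

# Regression matrix whose k-th column is u delayed by k samples.
X = np.column_stack(
    [np.concatenate([np.zeros(k), u[: n - k]]) for k in range(taps)]
)
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

err = np.max(np.abs(h_est - h_true))
print(err)  # small: the response is recovered despite the measurement noise
```

A continuous (rather than transient) perturbation keeps every column of the regression matrix excited, which is one intuition for the improved signal-to-noise ratio the preliminary experiments report.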
Max ERC Funding
3 477 202 €
Duration
Start date: 2012-06-01, End date: 2017-05-31
Project acronym AAATSI
Project Advanced Antenna Architecture for THZ Sensing Instruments
Researcher (PI) Andrea Neto
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Starting Grant (StG), PE7, ERC-2011-StG_20101014
Summary The Tera-Hertz portion of the spectrum presents unique potential for advanced applications. Currently the THz spectrum is revealing the mechanisms at the origin of our universe and provides the means to monitor the health of our planet via satellite-based sensing of critical gases. Time-domain sensing of the THz spectrum could potentially be the ideal tool for a vast variety of medical and security applications.
Presently, systems in the THz regime are extremely expensive, and consequently the THz spectrum is still the domain of only niche (expensive) scientific applications. The main problems are the lack of power and sensitivity. The wide unused THz spectral bandwidth is itself the only widely available resource that could compensate for these problems in the future. But so far, when scientists try to actually use this bandwidth, they run into an insurmountable physical limit: antenna dispersion. Antenna dispersion modifies the signal’s spectrum in a wavelength-dependent manner in all types of radiation, but it is particularly deleterious to THz signals because the spectrum is too wide to be digitized with foreseeable technology.
The goal of this proposal is to introduce breakthrough antenna technology that will eliminate the dispersion bottleneck and revolutionize time-domain sensing and spectroscopic space science. By achieving these goals, the project will pole-vault THz imaging technology into the 21st century and develop critically important enabling technologies which will satisfy the electrical engineering needs of the next 30 years and, in the long run, enable multi-terabit wireless communications.
In order to achieve these goals, I will first build upon two major breakthrough radiation mechanisms that I pioneered: Leaky Lenses and Connected Arrays. Eventually, ultra-wideband imaging arrays constituted by thousands of components will be designed on the basis of the new theoretical findings and demonstrated.
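The dispersion problem described above can be illustrated numerically. In this toy sketch (an assumed quadratic phase model, not the proposal's antenna physics), a dispersive element applies a frequency-dependent phase to a short wideband pulse; the pulse's energy is preserved but its peak spreads out in time, which is what corrupts wideband time-domain signals:

```python
import numpy as np

# Toy dispersion model: an all-pass, frequency-dependent phase applied to
# a short Gaussian pulse. No energy is lost, yet the pulse is smeared.
n = 1024
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - n // 2) / 5.0) ** 2)   # short wideband pulse

f = np.fft.fftfreq(n)
dispersive_phase = np.exp(-1j * 4000.0 * f**2)     # assumed quadratic phase
received = np.fft.ifft(np.fft.fft(pulse) * dispersive_phase)

peak_in = pulse.max()
peak_out = np.abs(received).max()
print(peak_out < peak_in)   # True: energy conserved, peak reduced by spreading
```

Because the distortion here is a pure phase, it could in principle be equalized digitally; the abstract's point is that at THz bandwidths the signal cannot be digitized, so the correction must happen in the antenna itself.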
Max ERC Funding
1 499 487 €
Duration
Start date: 2011-11-01, End date: 2017-10-31
Project acronym ADULT
Project Analysis of the Dark Universe through Lensing Tomography
Researcher (PI) Hendrik Hoekstra
Host Institution (HI) UNIVERSITEIT LEIDEN
Call Details Starting Grant (StG), PE9, ERC-2011-StG_20101014
Summary The discoveries that the expansion of the universe is accelerating due to an unknown “dark energy”
and that most of the matter is invisible, highlight our lack of understanding of the major constituents
of the universe. These surprising findings set the stage for research in cosmology at the start of the
21st century. The objective of this proposal is to advance observational constraints to a level where we can distinguish between the physical mechanisms proposed to explain the properties of dark energy and the observed distribution of dark matter throughout the universe. We use a relatively new technique called weak gravitational lensing: the accurate measurement of correlations in the orientations of distant galaxies enables us to map the dark matter distribution directly and to extract the cosmological information that is encoded by the large-scale structure.
To study the dark universe we will analyse data from a new state-of-the-art imaging survey: the Kilo-
Degree Survey (KiDS) will cover 1500 square degrees in 9 filters. The combination of its large survey
area and the availability of exquisite photometric redshifts for the sources makes KiDS the first
project that can place interesting constraints on the dark energy equation-of-state using lensing data
alone. Combined with complementary results from Planck, our measurements will provide one of the
best views of the dark side of the universe before much larger space-based projects commence.
To reach the desired accuracy we need to carefully measure the shapes of distant background galaxies. We also need to account for any intrinsic alignments that arise due to tidal interactions, rather than through lensing. Reducing these observational and physical biases to negligible levels is a necessary step to ensure the success of KiDS and an important part of our preparation for more challenging projects such as the European-led space mission Euclid.
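The central estimator behind weak lensing is a two-point correlation of galaxy ellipticities as a function of pair separation. The following is a deliberately simplified sketch (synthetic positions, a single assumed shear pattern, one ellipticity component rather than the tangential/cross decomposition real pipelines use) showing the shape of that estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic toy survey: a coherent "shear" pattern buried in shape noise.
n = 1000
x, y = rng.uniform(0, 10, n), rng.uniform(0, 10, n)  # positions (degrees)
shear = 0.05 * np.sin(x)                             # assumed coherent field
e = shear + 0.1 * rng.standard_normal(n)             # observed ellipticities

# All unique pairs, binned by separation.
dx = x[:, None] - x[None, :]
dy = y[:, None] - y[None, :]
r = np.hypot(dx, dy)
prod = e[:, None] * e[None, :]

iu = np.triu_indices(n, k=1)                         # each pair counted once
bins = np.linspace(0, 5, 6)
which = np.digitize(r[iu], bins) - 1
xi = np.array([prod[iu][which == b].mean() for b in range(5)])
print(xi)  # correlation vs separation; nearby pairs retain the coherent signal
```

Averaging over many pairs beats down the per-galaxy shape noise, which is why the estimator is so sensitive to the residual observational and alignment biases the abstract describes: any spurious correlated ellipticity survives the averaging just like the lensing signal does.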
Max ERC Funding
1 316 880 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym AGGLONANOCOAT
Project The interplay between agglomeration and coating of nanoparticles in the gas phase
Researcher (PI) Jan Rudolf Van Ommen
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Starting Grant (StG), PE8, ERC-2011-StG_20101014
Summary This proposal aims to develop a generic synthesis approach for core-shell nanoparticles by unravelling the relevant mechanisms. Core-shell nanoparticles have high potential in heterogeneous catalysis, energy storage, and medical applications. However, on a fundamental level there is currently a poor understanding of how to produce such nanostructured particles in a controllable and scalable manner.
The main barriers to achieving this goal are understanding how nanoparticles agglomerate into loose dynamic clusters and controlling the agglomeration process in gas flows during coating, such that uniform coatings can be made. This is very challenging because of the two-way coupling between agglomeration and coating: during coating we change the particle surfaces and thus the way the particles stick together, and correspondingly the stickiness of the particles determines how easily reactants can reach their surfaces.
Innovatively, the project will be the first systematic study of this multi-scale phenomenon, with investigations at all relevant length scales. Current synthesis approaches – mostly carried out in the liquid phase – are typically developed case by case. I will coat nanoparticles in the gas phase with atomic layer deposition (ALD): a technique from the semiconductor industry that can deposit a wide range of materials. ALD applied to flat substrates offers excellent control over layer thickness. I will investigate the modification of single-particle surfaces, particle-particle interactions, the structure of agglomerates, and the flow behaviour of large numbers of agglomerates. To this end, I will apply a multidisciplinary approach, combining disciplines such as physical chemistry, fluid dynamics, and reaction engineering.
Max ERC Funding
1 409 952 €
Duration
Start date: 2011-12-01, End date: 2016-11-30
Project acronym ALPROS
Project Artificial Life-like Processive Systems
Researcher (PI) Roeland Johannes Maria Nolte
Host Institution (HI) STICHTING KATHOLIEKE UNIVERSITEIT
Call Details Advanced Grant (AdG), PE5, ERC-2011-ADG_20110209
Summary Toroidal processive enzymes (e.g. enzymes/proteins that are able to thread onto biopolymers and to perform stepwise reactions along the polymer chain) are among the most fascinating tools in the clockwork machinery of life. Processive catalysis is ubiquitous in Nature, viz. DNA polymerases and endo- and exo-nucleases; it plays a crucial role in numerous events of the cell’s life, including most of the replication, transmission, expression and repair processes of the genetic information. In the case of DNA polymerases, the protein catalyst encircles the DNA and, whilst moving along it, makes copies of high fidelity. Although numerous works have been reported on the synthesis of analogues of natural enzymes, comparatively few efforts have been made to mimic these processive properties. It is the goal of this proposal to rectify this oversight and unravel the essential components of Nature’s polymer catalysts. The individual projects are designed to specifically target the essential aspects of processive catalysis, i.e. rate of motion, rate of catalysis, and transfer of information. One project is aimed at extending the research into a processive catalytic system that is more suitable for industrial application. Two projects involve more farsighted studies and are designed to push the research well beyond the current boundaries, into the area of Turing machines and bio-rotaxane catalysts which can modify DNA in a non-natural process. The vision of this proposal is to open up the field of ‘processive catalysis’ and invigorate the next generation of chemists to develop information-transfer and toroidal processive catalysts. The construction of synthetic analogues of processive enzymes could open a gate toward a large range of applications, ranging from intelligent tailoring of polymers to information storage and processing.
Max ERC Funding
1 603 699 €
Duration
Start date: 2012-02-01, End date: 2017-01-31
Project acronym BRAINSIGNALS
Project Optical dissection of circuits underlying fast cholinergic signalling during cognitive behaviour
Researcher (PI) Huibert Mansvelder
Host Institution (HI) STICHTING VU
Call Details Starting Grant (StG), LS5, ERC-2011-StG_20101109
Summary Our ability to think, to memorize and to focus our thoughts depends on acetylcholine signalling in the brain. The loss of cholinergic signalling in, for instance, Alzheimer’s disease strongly compromises these cognitive abilities. The traditional view on the role of cholinergic input to the neocortex is that slowly changing levels of extracellular acetylcholine (ACh) mediate different arousal states. This view has been challenged by recent studies demonstrating that rapid phasic changes in ACh levels at the scale of seconds are correlated with focus of attention, suggesting that these signals may mediate defined cognitive operations. Despite a wealth of anatomical data on the organization of the cholinergic system, very little understanding exists of its functional organization. How the relatively sparse input of cholinergic transmission in the prefrontal cortex elicits such a profound and specific control over attention is unknown. The main objective of this proposal is to develop a causal understanding of how cellular mechanisms of fast acetylcholine signalling are orchestrated during cognitive behaviour.
In a series of studies, I have identified several synaptic and cellular mechanisms by which the cholinergic system can alter neuronal circuitry function, both in cortical and subcortical areas. I have used a combination of behavioural, physiological and genetic methods in which I manipulated cholinergic receptor functionality in the prefrontal cortex in a subunit-specific manner, and found that ACh receptors in the prefrontal cortex control attention performance. Recent advances in optogenetic and electrochemical methods now make it possible to rapidly manipulate and measure acetylcholine levels in freely moving, behaving animals. Using these techniques, I aim to uncover which cholinergic neurons are involved in fast cholinergic signalling during cognition and to uncover the underlying neuronal mechanisms that alter prefrontal cortical network function.
Max ERC Funding
1 499 242 €
Duration
Start date: 2011-11-01, End date: 2016-10-31
Project acronym BRiCPT
Project Basic Research in Cryptographic Protocol Theory
Researcher (PI) Jesper Buus Nielsen
Host Institution (HI) AARHUS UNIVERSITET
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary In cryptographic protocol theory, we consider a situation where a number of entities want to solve some problem over a computer network. Each entity has some secret data it does not want the other entities to learn, and yet they all want to learn something about the common set of data. For instance, in an electronic election they want to know the number of yes-votes without revealing who voted what, and in an electronic auction they want to find the winner without leaking the bids of the losers.
A main focus of the project is to develop new techniques for solving such protocol problems. We are in particular interested in techniques which can automatically construct a protocol solving a problem given only a description of what the problem is. My focus will be theoretical basic research, but I believe that advancing the theory of secure protocol compilers will have an immense impact on the practice of developing secure protocols.
When one develops complex protocols, it is important to be able to verify their correctness before they are deployed, particularly when the purpose of the protocols is to protect information: by the time an error is found and corrected, the sensitive data may already have been compromised. Therefore, cryptographic protocol theory develops models of what it means for a protocol to be secure, and techniques for analyzing whether a given protocol is secure or not.
A second main focus of the project is to develop better security models, as existing security models suffer either from the problem that some protocols can be proven secure which are not secure in practice, or from the problem that it is impossible to prove the security of some protocols which are believed to be secure in practice. My focus will again be on theoretical basic research, but I believe that better security models are important for advancing a practice where protocols are verified as secure before being deployed.
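The flavor of problem described above (a tally without revealing votes) can be illustrated with the simplest secure-computation building block, additive secret sharing. This is a textbook toy sketch, not a protocol from the project: each voter splits their vote into random shares that sum to the vote modulo a public prime, so no single party's view reveals any individual vote, yet the shares combine to the correct tally.

```python
import random

P = 2_000_003  # public prime modulus

def share(vote, n_parties, rng):
    """Split `vote` into n_parties random shares summing to `vote` mod P."""
    shares = [rng.randrange(P) for _ in range(n_parties - 1)]
    shares.append((vote - sum(shares)) % P)
    return shares

rng = random.Random(42)
votes = [1, 0, 1]                                 # secret yes/no votes
all_shares = [share(v, 3, rng) for v in votes]    # voter i sends share j to party j

# Each party j locally adds up the j-th share it received from every voter...
partial = [sum(col) % P for col in zip(*all_shares)]
# ...and the published partial sums combine to reveal only the tally.
tally = sum(partial) % P
print(tally)  # 2
```

Each individual share is uniformly random, so any single party learns nothing about a vote; only the final combination of all partial sums is informative. Real protocols must additionally handle malicious parties and prove security in a formal model, which is exactly where the security-model questions raised in the abstract arise.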
Max ERC Funding
1 171 019 €
Duration
Start date: 2011-12-01, End date: 2016-11-30
Project acronym BROWSE
Project Beam-steered Reconfigurable Optical-Wireless System for Energy-efficient communication
Researcher (PI) Antonius Marcellus Jozef Koonen
Host Institution (HI) TECHNISCHE UNIVERSITEIT EINDHOVEN
Call Details Advanced Grant (AdG), PE7, ERC-2011-ADG_20110209
Summary The exploding need for wireless communication capacity is getting beyond the capabilities of traditional radio techniques. The available radio bandwidth gets exhausted, wireless devices start interfering with each other in this overcrowded radio spectrum, and high-capacity radio is power-hungry. Optics can offer a breakthrough, by means of the huge bandwidth of its spectrum, together with intelligent networking.
Our ambition is to make a giant step forward in wireless communications, by a revolutionary combination of novel free-space optical beam diversity techniques, an intelligently routed optical fibre platform, and flexible radio communication techniques. This hybrid technology will increase the available wireless bandwidth by several orders of magnitude, while operating very energy-efficiently.
We will investigate the use of narrowly confined optical pencil beams, aided by optical beam tracking, for the downstream part of the communication channel, and radio technology for upstream. The optical beams allow extremely high data rates (10-100 Gbit/s) as their carrier frequency is orders of magnitude larger than that of radio waves, and they can serve many users without interference due to their spatial confinement. Moreover, they reduce the power consumption thanks to their excellent directivity. In the (less demanding) upstream path, we will explore low-power, highly integrated radio technology offering capacities of 3-30 Gbit/s. We will combine these with intelligent optical routing techniques in the fibre backbone network, and with user localisation and tracking capabilities using advanced upstream radio techniques, in order to deliver ultra-broadband services to every user, tailored to their device. We will explore an autonomic network management and control system to orchestrate the heterogeneous resources and evolve these as the user’s needs, context, device capabilities and energy requirements change.
Summary
The exploding need for wireless communication capacity is getting beyond the capabilities of traditional radio techniques. The available radio bandwidth gets exhausted, wireless devices start interfering with each other in this overcrowded radio spectrum, and high-capacity radio is power-hungry. Optics can offer a breakthrough, by means of the huge bandwidth of its spectrum, together with intelligent networking.
Our ambition is to make a giant step forward in wireless communications, by a revolutionary combination of novel free-space optical beam diversity techniques, an intelligently routed optical fibre platform, and flexible radio communication techniques. This hybrid technology will increase the available wireless bandwidth by several orders of magnitude, while operating very energy-efficiently.
We will investigate the use of narrowly confined optical pencil beams aided by optical beam tracking for the downstream part of the communication channel, and radio technology for upstream. The optical beams allow extremely high data rates (10-100 Gbit/s), as their carrier frequency is orders of magnitude higher than that of radio waves, and can serve many users without interference owing to their spatial confinement. Moreover, they reduce power consumption through their excellent directivity. In the (less demanding) upstream path, we will explore low-power, highly integrated radio technology offering capacities of 3-30 Gbit/s. We combine these with intelligent optical routing techniques in the fibre backbone network, and with user localisation and tracking capabilities using advanced upstream radio techniques, in order to deliver ultra-broadband services to every user, tailored to their device. We will explore an autonomic network management and control system to orchestrate the heterogeneous resources and evolve these as the user’s needs, context, device capabilities and energy requirements change.
Max ERC Funding
2 430 353 €
Duration
Start date: 2012-09-01, End date: 2017-12-31
Project acronym CCC
Project Cracking the Cerebellar Code
Researcher (PI) Christiaan Innocentius De Zeeuw
Host Institution (HI) ERASMUS UNIVERSITAIR MEDISCH CENTRUM ROTTERDAM
Call Details Advanced Grant (AdG), LS5, ERC-2011-ADG_20110310
Summary Spike trains transfer information to and from neurons. Most studies so far assume that the average firing rate or “rate coding” is the predominant way of information coding. However, spikes occur at millisecond precision, and their actual timing or “temporal coding” can in principle strongly increase the information content of spike trains. The two coding mechanisms are not mutually exclusive. Neurons may switch between rate and temporal coding, or use a combination of both coding mechanisms at the same time, which would increase the information content of spike trains even further. Here, we propose to investigate the hypothesis that temporal coding plays, next to rate coding, important and specific roles in cerebellar processing during learning. The cerebellum is ideal to study this timely topic, because it has a clear anatomy with well-organized modules and matrices, a well-described physiology of different types of neurons with distinguishable spiking activity, and a central role in various forms of tractable motor learning. Moreover, uniquely in the brain, the main types of neurons in the cerebellar system can be genetically manipulated in a cell-specific fashion, which will allow us to investigate the behavioural importance of both coding mechanisms following cell-specific interference and/or during cell-specific visual imaging. Thus, for this proposal we will create conditional mouse mutants that will be subjected to learning paradigms in which we can disentangle the contributions of rate coding and temporal coding using electrophysiological and optogenetic recordings and stimulation. Together, our experiments should elucidate how neurons in the brain communicate during natural learning behaviour and how one may be able to intervene in this process to affect or improve procedural learning skills.
Max ERC Funding
2 499 600 €
Duration
Start date: 2012-04-01, End date: 2017-03-31
Project acronym CHEMBIOSPHING
Project Chemical biology of sphingolipids: fundamental studies and clinical applications
Researcher (PI) Herman Steven Overkleeft
Host Institution (HI) UNIVERSITEIT LEIDEN
Call Details Advanced Grant (AdG), PE5, ERC-2011-ADG_20110209
Summary "Sphingolipids are major components of the human cell and are involved in human pathologies ranging from lysosomal storage disorders to type 2 diabetes. Here, we propose to establish an integrated research program for the study of sphingolipid metabolism, in health and disease. We will combine state-of-the-art synthetic organic chemistry, bioorganic chemistry, analytical chemistry, molecular biology and biochemistry techniques and concepts, and apply these in an integrated chemical biology approach to study and manipulate sphingolipid metabolism in vivo and in vitro, using human cells and animal models. The program is subdivided into three individual research lines that are interconnected both in terms of technology development and in their biological context. 1) We will develop modified sphinganine derivatives and apply these to study sphingolipid homeostasis in cells derived from healthy and diseased (Gaucher, Fabry, Niemann-Pick A/B disease) individuals/animal models. This question will be addressed with a chemical metabolomics/lipidomics approach. 2) We will develop activity-based probes aimed at monitoring enzyme activity levels of glycosidases involved in (glyco)sphingolipid metabolism, in particular the enzymes that, when mutated and thereby reduced in activity, are responsible for the lysosomal storage disorders Gaucher disease and Fabry disease. 3) We will develop well-defined enzymes and chaperone proteins for directed correction of sphingolipid homeostasis in Gaucher, Fabry and Niemann-Pick A/B patients, via a newly designed semi-synthetic approach that combines sortase-mediated ligation with synthetic chemistry. Deliverables are a better understanding of the composition of the sphingolipid pools that are at the basis of lysosomal storage disorders, effective ways to monitor in situ the efficacy of therapies (enzyme inhibitors, chemical chaperones, recombinant enzymes) to treat these, and improved semi-synthetic proteins for enzyme replacement therapy."
Max ERC Funding
2 999 600 €
Duration
Start date: 2012-06-01, End date: 2017-05-31