Project acronym 3D-FM
Project Taking Force Microscopy into the Third Dimension
Researcher (PI) Tjerk Hendrik Oosterkamp
Host Institution (HI) UNIVERSITEIT LEIDEN
Call Details Starting Grant (StG), PE3, ERC-2007-StG
Summary I propose to pursue two emerging Force Microscopy techniques that allow measuring structural properties below the surface of the specimen. Whereas Force Microscopy (most commonly known under the name AFM) is usually limited to measuring the surface topography and surface properties of a specimen, I will demonstrate that Force Microscopy can achieve true 3D images of the structure of the cell nucleus. In Ultrasound Force Microscopy, an ultrasound wave is launched from below towards the surface of the specimen. After the sound waves interact with structures beneath the surface of the specimen, the local variations in the amplitude and phase shift of the ultrasonic surface motion are collected by the Force Microscopy tip. Previously measured 2D maps of the surface response have shown that this response is sensitive to structures below the surface. In this project I will employ miniature AFM cantilevers and nanotube tips that I have already developed in my lab. This will allow me to quickly acquire many such 2D maps at a much wider range of ultrasound frequencies and from these 2D maps calculate the full 3D structure below the surface. I expect this technique to have a resolving power better than 10 nm in three dimensions as far as 2 microns below the surface. In parallel I will introduce a major improvement to a technique based on Nuclear Magnetic Resonance (NMR). Magnetic Resonance Force Microscopy measures the interaction of a rotating nuclear spin with the field gradient of a magnetic Force Microscopy tip. However, these forces are so small that they pose an enormous detection challenge. Miniature cantilevers and nanotube tips, in combination with additional innovations in the detection of the cantilever motion, can overcome this problem. I expect to be able to measure the combined signal of 100 proton spins or fewer, which will allow me to measure proton densities with a resolution of 5 nm, but possibly even with atomic resolution.
Max ERC Funding
1 794 960 €
Duration
Start date: 2008-08-01, End date: 2013-07-31
Project acronym 4C
Project 4C technology: uncovering the multi-dimensional structure of the genome
Researcher (PI) Wouter Leonard De Laat
Host Institution (HI) KONINKLIJKE NEDERLANDSE AKADEMIE VAN WETENSCHAPPEN - KNAW
Call Details Starting Grant (StG), LS2, ERC-2007-StG
Summary The architecture of DNA in the cell nucleus is an emerging key epigenetic contributor to genome function. We recently developed 4C technology, a high-throughput technique that combines state-of-the-art 3C technology with tailored microarrays to allow a uniquely unbiased genome-wide search for DNA loci that interact in the nuclear space. Based on 4C technology, we were the first to provide a comprehensive overview of long-range DNA contacts of selected loci. The data showed that active and inactive chromatin domains contact many distinct regions within and between chromosomes, and that genes switch long-range DNA contacts in relation to their expression status. 4C technology not only allows investigating the three-dimensional structure of DNA in the nucleus; it also accurately reconstructs at least 10 megabases of the one-dimensional chromosome sequence map around the target sequence. Changes in this physical map as a result of genomic rearrangements are therefore identified by 4C technology. We recently demonstrated that 4C detects deletions, balanced inversions and translocations in patient samples at a resolution (~7 kb) that allowed immediate sequencing of the breakpoints. 4C technology therefore offers the first high-resolution genomic approach that can identify both balanced and unbalanced genomic rearrangements, and it is expected to become an important tool in clinical diagnosis and prognosis. Key objectives of this proposal are: 1. Explore the functional significance of DNA folding in the nucleus by systematically applying 4C technology to differentially expressed gene loci. 2. Adapt 4C technology such that it allows for massively parallel analysis of DNA interactions between regulatory elements and gene promoters. This method would greatly facilitate the identification of functionally relevant DNA elements in the genome. 3. Develop 4C technology into a clinical diagnostic tool for the accurate detection of balanced and unbalanced rearrangements.
Max ERC Funding
1 225 000 €
Duration
Start date: 2008-09-01, End date: 2013-08-31
Project acronym 4D-EEG
Project 4D-EEG: A new tool to investigate the spatial and temporal activity patterns in the brain
Researcher (PI) Franciscus C.T. Van Der Helm
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Advanced Grant (AdG), PE7, ERC-2011-ADG_20110209
Summary Our first goal is to develop a new tool to determine brain activity with a high temporal (< 1 ms) and spatial (about 2 mm) resolution, with a focus on motor control. High-density EEG (up to 256 electrodes) will be used for EEG source localization. Advanced force-controlled robot manipulators will be used to impose continuous force perturbations to the joints. Advanced closed-loop system identification algorithms will identify the dynamic EEG response of multiple brain areas to the perturbation, leading to a functional interpretation of EEG. The propagation of the signal in time and 3D space through the cortex can be monitored: 4D-EEG. Preliminary experiments with EEG source localization have shown that continuous force perturbations result in a better signal-to-noise ratio and coherence than the current method using transient perturbations.
4D-EEG will be a direct measure of the neural activity in the brain with an excellent temporal response, and it will be easy to use in combination with motor control tasks. The new 4D-EEG method is expected to provide a breakthrough in comparison to functional MRI (fMRI) in elucidating the meaning of cortical map plasticity in motor learning.
Our second goal is to generate and validate new hypotheses about the longitudinal relationship between motor learning and cortical map plasticity by clinically applying 4D-EEG in an intensive, repeated-measurement design in patients who have suffered a stroke. The application of 4D-EEG combined with haptic robots will allow us to discover how dynamics in cortical map plasticity are related to upper-limb recovery after stroke, in terms of both neural repair and the use of behavioral compensation strategies while performing meaningful motor tasks. The non-invasive 4D-EEG technique combined with haptic robots will open a window onto what and how patients (re)learn when showing motor recovery after stroke, allowing us to develop more effective patient-tailored therapies in neuro-rehabilitation.
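The identification step described above rests on spectral estimates relating the imposed perturbation to the measured EEG. A highly simplified, open-loop sketch (my own illustration, not the project's algorithms, which operate in closed loop) is the Welch-averaged magnitude-squared coherence between a perturbation signal and one recorded channel:

```python
import numpy as np

def msc(x, y, nseg=64):
    """Welch-averaged magnitude-squared coherence between two signals.

    Open-loop toy version: segment both signals, average auto- and
    cross-spectra, then normalize. Values near 1 indicate a strong
    linear relation at that frequency; near 0, none.
    """
    seglen = len(x) // nseg
    win = np.hanning(seglen)
    sxx = syy = sxy = 0.0
    for k in range(nseg):
        xs = x[k * seglen:(k + 1) * seglen] * win
        ys = y[k * seglen:(k + 1) * seglen] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        sxx = sxx + np.abs(X) ** 2        # perturbation auto-spectrum
        syy = syy + np.abs(Y) ** 2        # response auto-spectrum
        sxy = sxy + np.conj(X) * Y        # cross-spectrum
    return np.abs(sxy) ** 2 / (sxx * syy)

# synthetic data: one channel linearly driven by the perturbation, one not
rng = np.random.default_rng(0)
perturb = rng.standard_normal(64 * 256)
eeg_linked = 0.8 * perturb + 0.1 * rng.standard_normal(perturb.size)
eeg_noise = rng.standard_normal(perturb.size)
```

On the synthetic channels, the driven channel shows coherence close to 1 across the band while the independent channel stays near 1/nseg, mirroring the signal-quality comparison the summary mentions for continuous versus transient perturbations.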
Max ERC Funding
3 477 202 €
Duration
Start date: 2012-06-01, End date: 2017-05-31
Project acronym A-DATADRIVE-B
Project Advanced Data-Driven Black-box modelling
Researcher (PI) Johan Adelia K Suykens
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Advanced Grant (AdG), PE7, ERC-2011-ADG_20110209
Summary Making accurate predictions is a crucial factor in many systems (such as in modelling energy consumption, power load forecasting, traffic networks, process industry, environmental modelling, biomedicine, brain-machine interfaces) for cost savings, efficiency, health, safety and organizational purposes. In this proposal we aim at realizing a new generation of more advanced black-box modelling techniques for estimating predictive models from measured data. We will study different optimization modelling frameworks in order to obtain improved black-box modelling approaches. This will be done by specifying models through constrained optimization problems, studying different candidate core models (parametric models, support vector machines and kernel methods) together with additional sets of constraints and regularization mechanisms. Different candidate mathematical frameworks will be considered, with models that possess primal and (Lagrange) dual model representations, functional analysis in reproducing kernel Hilbert spaces, operator splitting and optimization in Banach spaces. Several aspects relevant to black-box models will be studied, including incorporation of prior knowledge, structured dynamical systems, tensorial data representations, interpretability and sparsity, and general-purpose optimization algorithms. The methods should be suitable for handling larger data sets and high-dimensional input spaces. A further goal is to realize a next-generation software tool (including symbolic generation of models and handling of different supervised and unsupervised learning tasks, and static and dynamic systems) that can be generically applied to data from different application areas. The proposal A-DATADRIVE-B aims to connect end-users to these more advanced methods through a user-friendly data-driven black-box modelling tool. The methods and tool will be tested in connection with several real-life applications.
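The simplest instance of the kernel-method family with primal and dual representations mentioned in the summary is kernel ridge regression, where the dual variables follow from a single regularized linear system. The sketch below is a generic textbook illustration, not the project's software or its specific model class:

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_dual(X, y, lam=1e-2, sigma=0.5):
    """Dual solution: solve (K + lam * I) alpha = y."""
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xq, X, alpha, sigma=0.5):
    """Evaluate the model through its dual representation."""
    return rbf_kernel(Xq, X, sigma) @ alpha

# fit a noiseless sine: the dual weights interpolate it smoothly
X = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
y = np.sin(X).ravel()
alpha = fit_dual(X, y)
```

The regularization constant lam plays exactly the role of the "regularization mechanisms" in the summary: it trades training fit against smoothness, and the same model can be written in primal (feature-space weights) or dual (alpha) form.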
Max ERC Funding
2 485 800 €
Duration
Start date: 2012-04-01, End date: 2017-03-31
Project acronym AAATSI
Project Advanced Antenna Architecture for THZ Sensing Instruments
Researcher (PI) Andrea Neto
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Starting Grant (StG), PE7, ERC-2011-StG_20101014
Summary The Tera-Hertz portion of the spectrum presents unique potential for advanced applications. Currently the THz spectrum is revealing the mechanisms at the origin of our universe and provides the means to monitor the health of our planet via satellite-based sensing of critical gases. Potentially, time-domain sensing of the THz spectrum will be the ideal tool for a vast variety of medical and security applications.
Presently, systems in the THz regime are extremely expensive, and consequently the THz spectrum is still the domain of only niche (expensive) scientific applications. The main problems are the lack of power and sensitivity. The wide unused THz spectral bandwidth is itself the only widely available resource that can compensate for these problems in the future. But so far, when scientists try to fully exploit this bandwidth, they run into an insurmountable physical limit: antenna dispersion. Antenna dispersion modifies the signal’s spectrum in a wavelength-dependent manner in all types of radiation, but is particularly deleterious to THz signals because the spectrum is too wide and, with foreseeable technology, cannot be digitized.
The goal of this proposal is to introduce breakthrough antenna technology that will eliminate the dispersion bottleneck and revolutionize time-domain sensing and spectroscopic space science. In achieving these goals, the project will vault THz imaging technology into the 21st century and develop critically important enabling technologies that will satisfy the electrical engineering needs of the next 30 years and, in the long run, enable multi-terabit wireless communications.
In order to achieve these goals, I will first build upon two major breakthrough radiation mechanisms that I pioneered: Leaky Lenses and Connected Arrays. Eventually, ultra-wideband imaging arrays consisting of thousands of components will be designed on the basis of the new theoretical findings and demonstrated.
Max ERC Funding
1 499 487 €
Duration
Start date: 2011-11-01, End date: 2017-10-31
Project acronym ABACUS
Project Advancing Behavioral and Cognitive Understanding of Speech
Researcher (PI) Bart De Boer
Host Institution (HI) VRIJE UNIVERSITEIT BRUSSEL
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary I intend to investigate what cognitive mechanisms give us combinatorial speech. Combinatorial speech is the ability to make new words using pre-existing speech sounds. Humans are the only apes that can do this, yet we do not know how our brains do it, nor how exactly we differ from other apes. Using new experimental techniques to study human behavior and new computational techniques to model human cognition, I will find out how we deal with combinatorial speech.
The experimental part will study individual and cultural learning. Experimental cultural learning is a new technique that simulates cultural evolution in the laboratory. Two types of cultural learning will be used: iterated learning, which simulates language transfer across generations, and social coordination, which simulates emergence of norms in a language community. Using the two types of cultural learning together with individual learning experiments will help to zero in, from three angles, on how humans deal with combinatorial speech. In addition it will make a methodological contribution by comparing the strengths and weaknesses of the three methods.
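The iterated-learning paradigm described above has a well-known computational analogue: a transmission chain of Bayesian learners drifts toward the learners' prior (Griffiths & Kalish). The toy chain below, with a Beta prior over the frequency of one speech variant, is my own illustration of that dynamic, not one of the project's models:

```python
import random

def iterated_learning(p0, n_obs=10, a=2.0, b=5.0, generations=200, seed=0):
    """Transmission chain of Bayesian learners with a Beta(a, b) prior.

    Each learner hears n_obs utterances from its teacher, forms the
    posterior mean for the probability of one variant, and becomes the
    next teacher. The chain drifts toward the prior mean a / (a + b),
    whatever the starting frequency p0.
    """
    rng = random.Random(seed)
    p, chain = p0, [p0]
    for _ in range(generations):
        k = sum(rng.random() < p for _ in range(n_obs))  # observed count
        p = (k + a) / (n_obs + a + b)                    # posterior mean
        chain.append(p)
    return chain
```

Starting from p0 = 0.9, the chain settles around the prior mean 2/7 ≈ 0.29: in this idealization, what the laboratory chains reveal is the learners' inductive bias, which is exactly what makes experimental cultural learning informative about cognition.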
The computer modeling part will formalize hypotheses about how our brains deal with combinatorial speech. Two models will be built: a high-level model that will establish the basic algorithms with which combinatorial speech is learned and reproduced, and a neural model that will establish in more detail how the algorithms are implemented in the brain. In addition, the models, through increasing understanding of how humans deal with speech, will help bridge the performance gap between human and computer speech recognition.
The project will advance science in four ways: it will provide insight into how our unique ability for using combinatorial speech works, it will tell us how this is implemented in the brain, it will extend the novel methodology of experimental cultural learning and it will create new computer models for dealing with human speech.
Max ERC Funding
1 276 620 €
Duration
Start date: 2012-02-01, End date: 2017-01-31
Project acronym ABCTRANSPORT
Project Minimalist multipurpose ATP-binding cassette transporters
Researcher (PI) Dirk Jan Slotboom
Host Institution (HI) RIJKSUNIVERSITEIT GRONINGEN
Call Details Starting Grant (StG), LS1, ERC-2011-StG_20101109
Summary Many Gram-positive (pathogenic) bacteria are dependent on the uptake of vitamins from the environment or from the infected host. We have recently discovered the long-elusive family of membrane protein complexes catalyzing such transport. The vitamin transporters have an unprecedented modular architecture consisting of a single multipurpose energizing module (the Energy Coupling Factor, ECF) and multiple exchangeable membrane proteins responsible for substrate recognition (S-components). The S-components have characteristics of ion-gradient driven transporters (secondary active transporters), whereas the energizing modules are related to ATP-binding cassette (ABC) transporters (primary active transporters).
The aim of the proposal is threefold. First, we will address the question of how properties of primary and secondary transporters are combined in ECF transporters to obtain a novel transport mechanism. Second, we will study the fundamental and unresolved question of how protein-protein recognition takes place in the hydrophobic environment of the lipid bilayer. The modular nature of the ECF proteins offers a natural system to study the driving forces used for membrane protein interaction. Third, we will assess whether the ECF transport systems could become targets for antibacterial drugs. ECF transporters are found exclusively in prokaryotes, and their activity is often essential for the viability of Gram-positive pathogens. Thus they could turn out to be an Achilles’ heel for these organisms.
Structural and mechanistic studies (X-ray crystallography, microscopy, spectroscopy and biochemistry) will reveal how the different transport modes are combined in a single protein complex, how transport is energized and catalyzed, and how protein-protein recognition takes place. Microbiological screens will be developed to search for compounds that inhibit prokaryote-specific steps of the mechanism of ECF transporters.
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-01-01, End date: 2017-12-31
Project acronym ADDICTION
Project Beyond the Genetics of Addiction
Researcher (PI) Jacqueline Mignon Vink
Host Institution (HI) STICHTING KATHOLIEKE UNIVERSITEIT
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary My proposal seeks to explain the complex interplay between genetic and environmental causes of individual variation in substance use and the risk for abuse. Substance use is common. Substances like nicotine and cannabis have well-known negative health consequences, while alcohol and caffeine use may be both beneficial and detrimental, depending on the quantity and frequency of use. Twin studies (including my own) have demonstrated that both heritable and environmental factors play a role.
My proposal on substance use (nicotine, alcohol, cannabis and caffeine) is organized around several key objectives: 1. To unravel the complex contribution of genetic and environmental factors to substance use by using extended twin-family designs; 2. To identify and confirm genes and gene networks involved in substance use by using DNA-variant data; 3. To explore gene expression patterns with RNA data in substance users versus non-users; 4. To investigate biomarkers in substance users versus non-users using blood or urine; 5. To unravel the relation between substance use and health by linking twin-family data to national medical databases.
To realize these aims I will use the extensive resources of the Netherlands Twin Register (NTR), including both the longitudinal phenotype database and the biological samples. I have been involved in data collection, coordination of data collection and analysis of NTR data since 1999. With my comprehensive experience in data collection and analysis and my knowledge in the field of behavior genetics and addiction research, I will be able to successfully lead this cutting-edge project. Additional data crucial for the project will be collected by my team. Large samples will be available for this study and state-of-the-art methods will be used to analyze the data. Altogether, my project will offer powerful approaches to unravel the complex interaction between genetic and environmental causes of individual differences in substance use and the risk for abuse.
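The classical twin logic behind the first objective can be sketched with Falconer's back-of-the-envelope decomposition: monozygotic twins share all, dizygotic twins on average half of their segregating genes, so comparing their phenotypic correlations yields rough estimates of the A (additive genetic), C (shared environment) and E (unique environment) variance components. Actual twin-family analyses use structural equation models; this is only the intuition:

```python
def ace_from_twin_correlations(r_mz, r_dz):
    """Falconer's estimates from MZ and DZ twin correlations.

    a2: heritability, c2: shared environment, e2: unique environment
    plus measurement error. Crude point estimates only; real twin
    modelling fits structural equation models with confidence bounds.
    """
    a2 = 2.0 * (r_mz - r_dz)   # MZ share 2x the DZ genetic overlap
    c2 = 2.0 * r_dz - r_mz     # whatever MZ similarity genes can't explain
    e2 = 1.0 - r_mz            # even MZ twins differ by this much
    return a2, c2, e2
```

For example, illustrative correlations of 0.7 (MZ) and 0.4 (DZ) decompose into roughly A = 0.6, C = 0.1, E = 0.3, which by construction sum to 1.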
Max ERC Funding
1 491 964 €
Duration
Start date: 2011-12-01, End date: 2017-05-31
Project acronym ADULT
Project Analysis of the Dark Universe through Lensing Tomography
Researcher (PI) Hendrik Hoekstra
Host Institution (HI) UNIVERSITEIT LEIDEN
Call Details Starting Grant (StG), PE9, ERC-2011-StG_20101014
Summary The discoveries that the expansion of the universe is accelerating due to an unknown “dark energy” and that most of the matter is invisible highlight our lack of understanding of the major constituents of the universe. These surprising findings set the stage for research in cosmology at the start of the 21st century. The objective of this proposal is to advance observational constraints to a level where we can distinguish between the physical mechanisms proposed to explain the properties of dark energy and the observed distribution of dark matter throughout the universe. We use a relatively new technique called weak gravitational lensing: the accurate measurement of correlations in the orientations of distant galaxies enables us to map the dark matter distribution directly and to extract the cosmological information encoded in the large-scale structure.
To study the dark universe we will analyse data from a new state-of-the-art imaging survey: the Kilo-Degree Survey (KiDS), which will cover 1500 square degrees in 9 filters. The combination of its large survey area and the availability of exquisite photometric redshifts for the sources makes KiDS the first project that can place interesting constraints on the dark energy equation of state using lensing data alone. Combined with complementary results from Planck, our measurements will provide one of the best views of the dark side of the universe before much larger space-based projects commence.
To reach the desired accuracy we need to measure the shapes of distant background galaxies carefully. We also need to account for any intrinsic alignments that arise from tidal interactions rather than from lensing. Reducing these observational and physical biases to negligible levels is a necessary step to ensure the success of KiDS and an important part of our preparation for more challenging projects such as the European-led space mission Euclid.
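The lensing measurement described above amounts to averaging products of galaxy ellipticities over pairs at a given angular separation. Below is a toy, brute-force sketch of such a two-point estimator under a flat-sky assumption and without the per-pair tangential/cross rotation that real pipelines use; the positions and ellipticities are invented:

```python
# Toy estimator of a shear correlation function xi_+(theta): the mean
# product of ellipticity components over galaxy pairs in an angular bin.
# Flat-sky, no tangential projection; real surveys use optimized tree codes.

import math

def xi_plus(galaxies, theta_min, theta_max):
    """galaxies: list of (x, y, e1, e2) with positions in degrees and two
    ellipticity components. Returns the mean of e1*e1' + e2*e2' over pairs
    whose separation satisfies theta_min <= theta < theta_max."""
    total, npairs = 0.0, 0
    for i in range(len(galaxies)):
        x1, y1, e1a, e2a = galaxies[i]
        for j in range(i + 1, len(galaxies)):
            x2, y2, e1b, e2b = galaxies[j]
            theta = math.hypot(x1 - x2, y1 - y2)  # angular separation (flat sky)
            if theta_min <= theta < theta_max:
                total += e1a * e1b + e2a * e2b
                npairs += 1
    return total / npairs if npairs else 0.0

# Invented toy catalogue: only the first pair falls in the [0, 1) deg bin
toy = [(0.0, 0.0, 0.1, 0.0), (0.5, 0.0, 0.1, 0.0), (2.0, 0.0, -0.1, 0.0)]
```

A positive xi_+ on some angular scale signals coherent alignment of galaxy shapes on that scale, which is exactly the imprint lensing by large-scale structure leaves.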
Max ERC Funding
1 316 880 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym AGGLONANOCOAT
Project The interplay between agglomeration and coating of nanoparticles in the gas phase
Researcher (PI) Jan Rudolf Van Ommen
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Starting Grant (StG), PE8, ERC-2011-StG_20101014
Summary This proposal aims to develop a generic synthesis approach for core-shell nanoparticles by unravelling the relevant mechanisms. Core-shell nanoparticles have high potential in heterogeneous catalysis, energy storage, and medical applications. However, on a fundamental level there is currently a poor understanding of how to produce such nanostructured particles in a controllable and scalable manner.
The main barriers to achieving this goal are understanding how nanoparticles agglomerate into loose dynamic clusters and controlling the agglomeration process in gas flows during coating, such that uniform coatings can be made. This is very challenging because of the two-way coupling between agglomeration and coating: during coating we change the particle surfaces and thus the way the particles stick together, while in turn the stickiness of the particles determines how easily reactants can reach their surfaces.
Innovatively, the project will be the first systematic study of this multi-scale phenomenon, with investigations at all relevant length scales. Current synthesis approaches – mostly carried out in the liquid phase – are typically developed case by case. I will coat nanoparticles in the gas phase with atomic layer deposition (ALD): a technique from the semiconductor industry that can deposit a wide range of materials. Applied to flat substrates, ALD offers excellent control over layer thickness. I will investigate the modification of single-particle surfaces, particle-particle interactions, the structure of agglomerates, and the flow behaviour of large numbers of agglomerates. To this end I will apply a multidisciplinary approach, combining disciplines such as physical chemistry, fluid dynamics, and reaction engineering.
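ALD's layer-thickness control comes from its self-limiting cycles: each cycle adds a roughly constant growth-per-cycle (GPC), so shell thickness scales linearly with cycle count. A minimal illustration of this bookkeeping, using a GPC value typical of Al2O3 ALD purely as an assumed example:

```python
# Idealized ALD layer-thickness bookkeeping: each self-limiting cycle
# deposits a fixed growth-per-cycle (GPC), so the shell grows linearly
# with cycle count. The default GPC is a typical literature value for
# Al2O3 ALD, used here only as an example.

def shell_thickness_nm(cycles, gpc_nm=0.11):
    """Ideal coating thickness after a given number of ALD cycles."""
    return cycles * gpc_nm

def coated_diameter_nm(core_diameter_nm, cycles, gpc_nm=0.11):
    """Diameter of a spherical core-shell particle; the shell adds
    thickness on both sides of the core."""
    return core_diameter_nm + 2.0 * shell_thickness_nm(cycles, gpc_nm)

# e.g. a 20 nm core after 50 cycles grows to roughly 31 nm in diameter
```

On agglomerated nanoparticles this linearity only holds if reactants reach every surface each cycle, which is precisely why the coupling between agglomeration and coating matters.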
Max ERC Funding
1 409 952 €
Duration
Start date: 2011-12-01, End date: 2016-11-30