Project acronym BEBOP
Project Bacterial biofilms in porous structures: from biomechanics to control
Researcher (PI) Yohan Jean-Michel Louis DAVIT
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Starting Grant (StG), PE8, ERC-2018-STG
Summary The key ideas motivating this project are that 1) precise control of the properties of porous systems can be obtained by exploiting bacteria and their fantastic abilities; and 2) conversely, porous media (large surface-to-volume ratios, complex structures) could be a major part of bacterial synthetic biology, as a scaffold for growing large quantities of microorganisms in controlled bioreactors.
The main scientific obstacle to precise control of such processes is the lack of understanding of biophysical mechanisms in complex porous structures, even in the case of single-strain biofilms. The central hypothesis of this project is that a better fundamental understanding of biofilm biomechanics and physical ecology will yield a novel theoretical basis for engineering and control.
The first scientific objective is thus to gain insight into how fluid flow, transport phenomena and biofilms interact within connected multiscale heterogeneous structures - a major scientific challenge with wide-ranging implications. To this end, we will combine microfluidic and 3D-printed micro-bioreactor experiments; fluorescence and X-ray imaging; and high-performance computing blending CFD, individual-based models and pore-network approaches.
The second scientific objective is to create the primary building blocks toward a control theory of bacteria in porous media and innovative designs of microbial bioreactors. Building upon the previous objective, we first aim to extract from the complexity of biological responses the most universal engineering principles applying to such systems. We will then design a novel porous micro-bioreactor to demonstrate how the permeability and solute residence times can be controlled in a dynamic, reversible and stable way - an initial step toward controlling reaction rates.
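To make the permeability-control objective concrete, one worked relation (our illustration; the proposal does not commit to this particular law) is the Kozeny-Carman estimate for a granular medium of grain diameter $d$, in which biofilm growth that occludes pore space reduces the porosity $\phi$ and thereby the permeability $k$:

$$k = \frac{\phi^{3} d^{2}}{180\,(1-\phi)^{2}}$$

For example, biofilm growth that lowers $\phi$ from 0.4 to 0.3 reduces $k$ by roughly a factor of three, which is the kind of dynamic, reversible change the proposed micro-bioreactor aims to demonstrate.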
We envision that this will unlock a new generation of biotechnologies and novel bioreactor designs enabling translation from proof-of-concept synthetic microbiology to industrial processes.
Max ERC Funding
1 649 861 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym Big Mac
Project Microfluidic Approaches mimicking BIoGeological conditions to investigate subsurface CO2 recycling
Researcher (PI) SAMUEL CHARLES GEORGES MARRE
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Consolidator Grant (CoG), PE8, ERC-2016-COG
Summary The management of anthropogenic CO2 will be one of the main challenges of this century given the dramatic impact of greenhouse gases on our living environment. A fascinating strategy to restore the advantages of stored CO2 as a raw material would be to consider a slow biological upgrading process of CO2 in deep geological formations.
Significantly, the recent development of microfluidic tools to study pore-scale phenomena under high pressure opens new avenues to investigate such strategies. Thus, the strategic objective of this project is to develop and to use “Biological Geological Laboratories on a Chip - BioGLoCs” mimicking reservoir conditions in order to gain greater understanding of the mechanisms associated with the biogeological conversion of CO2 to methane in CO2 geological storage (CGS) environments at the pore scale.
The specific objectives are: (1) to determine the experimental conditions for the development of competent micro-organisms (methanogens) and to establish the methane production rates depending on the operating parameters, (2) to evaluate the feasibility of a H2 in situ production strategy (required to sustain the methanogenesis process), (3) to investigate the full bioconversion process in 2D and 3D, (4) to demonstrate the process scaling from pore scale to liter scale and (5) to evaluate the overall process performance.
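For context (standard stoichiometry, not quoted from the proposal), the hydrogenotrophic methanogenesis underlying objectives (1)-(3) is

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}$$

so every mole of methane produced consumes four moles of hydrogen, which is why objective (2), in-situ H2 production, is a prerequisite for sustaining the process.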
This multidisciplinary project, gathering expertise in chemical engineering and geomicrobiology, will be the first-ever use of microfluidic approaches to investigate a biogeological transformation while taking into account the thermo-hydro-bio-chemical processes. It will result in the identification of efficient geomicrobiological methods and materials to accelerate the CO2-to-methane biogeoconversion process. New generic lab-scale tools will also be made available for investigating related geological topics (enhanced oil recovery, deep geothermal energy, bioremediation of groundwater, shale gas recovery).
Max ERC Funding
1 995 354 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym BigFastData
Project Charting a New Horizon of Big and Fast Data Analysis through Integrated Algorithm Design
Researcher (PI) Yanlei DIAO
Host Institution (HI) ECOLE POLYTECHNIQUE
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary This proposal addresses a pressing need from emerging big data applications such as genomics and data center monitoring: besides the scale of processing, big data systems must also enable perpetual, low-latency processing for a broad set of analytical tasks, referred to as big and fast data analysis. Today’s technology falls severely short of such needs due to the lack of support for complex analytics with scale, low latency, and strong guarantees of user performance requirements. To bridge the gap, this proposal tackles a grand challenge: “How do we design an algorithmic foundation that enables the development of all necessary pillars of big and fast data analysis?” This proposal considers three pillars:
1) Parallelism: There is a fundamental tension between data parallelism (for scale) and pipeline parallelism (for low latency). We propose new approaches based on intelligent use of memory and workload properties to integrate both forms of parallelism.
2) Analytics: The literature lacks a large body of algorithms for critical order-related analytics to be run under data and pipeline parallelism. We propose new algorithmic frameworks to enable such analytics.
3) Optimization: When running analytics, today's big data systems offer only best-effort performance. We will transform such systems into a principled optimization framework that suits the new characteristics of big data infrastructure and adapts to meet user performance requirements.
The scale and complexity of the proposed algorithm design make this project high-risk and, at the same time, high-gain: it will lay a solid foundation for big and fast data analysis, enabling a new integrated parallel processing paradigm, algorithms for critical order-related analytics, and a principled optimizer with strong performance guarantees. It will also broadly enable accelerated information discovery in emerging domains such as genomics, as well as the economic benefits of early, well-informed decisions and reduced user payments.
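To make pillar 1 concrete, here is a minimal sketch (our illustration, not the proposal's system) of combining the two forms of parallelism in one job: several identical workers per stage give data parallelism, while queue-connected stages give pipeline parallelism, so downstream aggregation starts before upstream parsing has finished.

# Minimal sketch: data parallelism (4 workers in the parse stage) combined
# with pipeline parallelism (parse and aggregate stages run concurrently,
# connected by a queue). Toy records stand in for a real data stream.
from multiprocessing import Process, Queue

SENTINEL = None  # end-of-stream marker, one per upstream worker

def parse_stage(q_in, q_out):
    # Data-parallel stage: several processes run this same loop.
    while True:
        item = q_in.get()
        if item is SENTINEL:
            q_out.put(SENTINEL)
            break
        q_out.put(len(item))  # toy "parse": emit the record length

def aggregate_stage(q_in, n_upstream):
    # Pipeline stage: consumes results as they arrive, before parsing ends.
    total, done = 0, 0
    while done < n_upstream:
        item = q_in.get()
        if item is SENTINEL:
            done += 1
        else:
            total += item
    print("total parsed length:", total)

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    n_workers = 4  # degree of data parallelism in the parse stage
    parsers = [Process(target=parse_stage, args=(q1, q2)) for _ in range(n_workers)]
    agg = Process(target=aggregate_stage, args=(q2, n_workers))
    for p in parsers:
        p.start()
    agg.start()
    for record in ["acgt" * 8, "ttaacc", "ggg"]:  # toy input records
        q1.put(record)
    for _ in parsers:
        q1.put(SENTINEL)  # one sentinel per parse worker
    for p in parsers:
        p.join()
    agg.join()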
Max ERC Funding
2 472 752 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym BrainConquest
Project Boosting Brain-Computer Communication with high Quality User Training
Researcher (PI) Fabien LOTTE
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE7, ERC-2016-STG
Summary Brain-Computer Interfaces (BCIs) are communication systems that enable users to send commands to computers through brain signals only, by measuring and processing these signals. Making computer control possible without any physical activity, BCIs have promised to revolutionize many application areas, notably assistive technologies, e.g., for wheelchair control, and human-machine interaction. Despite this promising potential, BCIs are still barely used outside laboratories, due to their current poor reliability. For instance, BCIs using only two imagined hand movements as mental commands decode, on average, less than 80% of these commands correctly, while 10 to 30% of users cannot control a BCI at all.
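To put these figures in perspective, the standard Wolpaw estimate of the information transfer rate (quoted here for context, not taken from the proposal) gives the bits conveyed per selection among $N$ commands decoded with accuracy $P$:

$$B = \log_2 N + P \log_2 P + (1-P)\log_2\frac{1-P}{N-1}$$

For $N = 2$ and $P = 0.8$ this is only about 0.28 bits per command, and it falls to zero at the chance level $P = 0.5$, where the 10 to 30% of users who cannot control a BCI sit.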
A BCI should be considered a co-adaptive communication system: its users learn to encode commands in their brain signals (with mental imagery) that the machine learns to decode using signal processing. Most research efforts so far have been dedicated to decoding the commands. However, BCI control is a skill that users have to learn too. Unfortunately, how BCI users learn to encode the commands, although essential, is barely studied; i.e., fundamental knowledge about how users learn BCI control is lacking. Moreover, standard training approaches are based only on heuristics and do not satisfy human learning principles. Thus, poor BCI reliability is probably largely due to highly suboptimal user training.
In order to obtain a truly reliable BCI, we need to completely redefine user training approaches. To do so, I propose to study and statistically model how users learn to encode BCI commands. Then, based on human learning principles and this model, I propose to create a new generation of BCIs which ensure that users learn how to successfully encode commands with a high signal-to-noise ratio in their brain signals, hence making BCIs dramatically more reliable. Such a reliable BCI could positively change human-machine interaction, as BCIs have promised but failed to do so far.
Max ERC Funding
1 498 751 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym C0PEP0D
Project Life and death of a virtual copepod in turbulence
Researcher (PI) Christophe ELOY
Host Institution (HI) ECOLE CENTRALE DE MARSEILLE EGIM
Call Details Advanced Grant (AdG), PE8, ERC-2018-ADG
Summary Life is tough for planktonic copepods, constantly washed by turbulent flows. Yet, these millimetric crustaceans dominate the oceans in numbers. What has made them so successful? Copepod antennae are covered with hydrodynamic and chemical sensing hairs that allow copepods to detect prey, predators and mates, although they are blind. How do copepods process this sensing information? How do they extract a meaningful signal from turbulence noise? Today, we do not know.
C0PEP0D hypothesises that reinforcement learning tools can decipher how copepods process hydrodynamic and chemical sensing. Copepods face a problem similar to speech recognition or object detection, two common applications of reinforcement learning. However, copepods have only 1000 neurons, far fewer than most artificial neural networks. To approach the simple brain of copepods, we will use Darwinian evolution together with reinforcement learning, with the goal of finding minimal neural networks able to learn.
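A minimal sketch of the evolution-plus-learning idea follows (our toy illustration; the task, network size and hyperparameters are invented, not the project's): a population of tiny fixed-architecture networks is evolved by selection and mutation until some of them detect a weak periodic "signal" buried in noise.

# Toy neuroevolution: evolve the 68 weights of a 16-4-1 network so it
# detects a weak sinusoid in noise (a stand-in for a hydrodynamic signal).
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n=200):
    labels = rng.integers(0, 2, n)                      # signal present or not
    t = np.linspace(0.0, 1.0, 16)
    signal = labels[:, None] * np.sin(2 * np.pi * 5 * t)
    return signal + rng.normal(0.0, 1.0, (n, 16)), labels

def fitness(w, x, y):
    w1, w2 = w[:64].reshape(16, 4), w[64:]              # 16 -> 4 -> 1 network
    pred = (np.tanh(x @ w1) @ w2) > 0
    return (pred == y).mean()                           # classification accuracy

x, y = make_batch()
pop = rng.normal(0.0, 0.5, (50, 68))                    # initial population
for generation in range(100):
    scores = np.array([fitness(w, x, y) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]             # keep the 10 fittest
    offspring = parents[rng.integers(0, 10, 40)] + rng.normal(0.0, 0.1, (40, 68))
    pop = np.vstack([parents, offspring])               # elitism + mutation
print("best accuracy:", max(fitness(w, x, y) for w in pop))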
If we are to build a learning virtual copepod, challenging problems are ahead: we need fast methods to simulate turbulence and animal-flow interactions, new models of hydrodynamic signalling at finite Reynolds number, innovative reinforcement learning algorithms that embrace evolution and experiments with real copepods in turbulence. With these theoretical, numerical and experimental tools, we will address three questions:
Q1: Mating. How do male copepods follow the pheromone trail left by females?
Q2: Finding. How do copepods use hydrodynamic signals to ‘see’?
Q3: Feeding. What are the best feeding strategies in turbulent flow?
C0PEP0D will decipher how copepods process sensing information, but not only that. Because evolution is explicitly considered, it will offer a new perspective on marine ecology and evolution that could inspire artificial sensors. The evolutionary approach of reinforcement learning also offers a promising tool to tackle complex problems in biology and engineering.
Max ERC Funding
2 215 794 €
Duration
Start date: 2019-09-01, End date: 2024-08-31
Project acronym CARB-City
Project Physico-Chemistry of Carbonaceous Aerosol Pollution in Evolving Cities
Researcher (PI) Alma Hodzic
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Consolidator Grant (CoG), PE10, ERC-2018-COG
Summary Carbonaceous aerosols (organic and black carbon) remain a major unresolved issue in atmospheric science, especially in urban centers, where they are one of the dominant aerosol constituents and among the most toxic to human health. The challenge is twofold: first, our understanding of the sources, sinks and physico-chemical properties of the complex mixture of carbonaceous species is still incomplete; and second, the representation of urban heterogeneities in air quality models is inadequate, as these models are designed for regional applications.
The CARB-City project proposes the development of an innovative modeling framework that will address both issues by combining molecular-level chemical constraints and city-scale modeling to achieve the following objectives: (WP1) to develop and apply new chemical parameterizations, constrained by an explicit chemical model, for carbonaceous aerosol formation from urban precursors; and (WP2) to examine whether urban heterogeneities in sources and mixing can enhance non-linearities in the chemistry of carbonaceous compounds and modify their predicted composition. The new modeling framework will then be applied (WP3) to quantify the contribution of traditional and emerging urban aerosol precursor sources to the chemistry and toxicity of carbonaceous aerosols; and (WP4) to assess the effectiveness of greener-city strategies in removing aerosol pollutants.
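For orientation (a common textbook parameterization, not necessarily the one WP1 will adopt), organic-aerosol formation is often described by absorptive partitioning, in which the condensed fraction $\xi_i$ of a species $i$ with effective saturation concentration $C_i^*$ grows with the total organic aerosol mass $C_{\mathrm{OA}}$:

$$\xi_i = \left(1 + \frac{C_i^*}{C_{\mathrm{OA}}}\right)^{-1}$$

Non-linearity enters because $C_{\mathrm{OA}}$ itself sums the condensed mass of all species, which is one reason urban heterogeneities in sources and mixing (WP2) can shift the predicted composition.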
This work will enhance fundamental scientific understanding as to how key physico-chemical processes control the lifecycle of carbonaceous aerosols in cities, and will improve the predictability of air quality models in terms of composition and toxicity of urban aerosols, and their sensitivity to changes in energy and land use that cities are currently experiencing. The modeling framework will have the required chemical and spatial resolution for assessing human exposure to urban aerosols. This will allow policy makers to optimize urban emission reductions and sustainable urban development.
Max ERC Funding
1 727 009 €
Duration
Start date: 2020-01-01, End date: 2024-12-31
Project acronym CASSANDRA
Project Accelerating mass loss of Greenland: firn and the shifting runoff limit
Researcher (PI) Horst MACHGUTH
Host Institution (HI) UNIVERSITE DE FRIBOURG
Call Details Consolidator Grant (CoG), PE10, ERC-2018-COG
Summary Meltwater running off the flanks of the Greenland ice sheet contributes roughly 60% to its mass loss, the rest being due to calving. Only meltwater originating from below the elevation of the runoff limit leaves the ice sheet, contributing to mass loss; melt at higher elevations refreezes in the porous firn and does not drive mass loss. Therefore, any shift in the runoff limit modifies mass loss and subsequent sea level rise. New evidence shows surface runoff at increasingly high elevations, outpacing the rate at which the equilibrium line elevation rises. This research proposal focuses on the runoff limit as a powerful yet poorly understood modulator of Greenland mass balance. We will track the runoff limit over the full satellite era using two of the largest and oldest remote sensing archives, Landsat and the Advanced Very High Resolution Radiometer (AVHRR). We will establish time series of the runoff limit for all regions of Greenland to identify the mechanisms driving fluctuations in the runoff limit. This newly gained process understanding and a wealth of in-situ measurements will then be used to build firn hydrology models capable of simulating runoff and the associated runoff limit over time. Eventually, the firn hydrology models will be applied to reconcile estimates of Greenland's past, present and future mass balance. Covering the entire satellite era and all of Greenland, the focus on the runoff limit will constitute a paradigm shift leading to a major advance in our understanding of how vulnerable the surface of the ice sheet is to climate change and how the changing surface impacts runoff and thus Greenland's role in the global sea level budget.
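As a toy illustration of the runoff-limit concept (our sketch, with invented profiles; the project's firn hydrology models are far richer), runoff occurs wherever annual melt exceeds the firn's refreezing capacity, and the runoff limit is the highest elevation where that still happens:

# Toy runoff-limit criterion along an elevation transect: water leaves the
# ice sheet only where melt exceeds the firn's capacity to refreeze it.
import numpy as np

elev = np.arange(0.0, 2001.0, 100.0)            # elevation, m a.s.l.
melt = np.maximum(0.0, 4.0 - 0.002 * elev)      # invented melt, m w.e. / yr
refreeze_cap = np.minimum(1.5, 0.001 * elev)    # invented firn capacity, m w.e. / yr

runoff = melt > refreeze_cap                    # True where runoff occurs
runoff_limit = elev[runoff].max() if runoff.any() else None
print("runoff limit:", runoff_limit, "m a.s.l.")  # 1300.0 with these profiles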
Max ERC Funding
1 989 181 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym CENNS
Project Probing new physics with Coherent Elastic Neutrino-Nucleus Scattering and a tabletop experiment
Researcher (PI) Julien Billard
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Starting Grant (StG), PE2, ERC-2018-STG
Summary Ever since the Higgs boson was discovered at the LHC in 2012, we have had confirmation that the Standard Model (SM) of particle physics has to be extended. In parallel, the long-standing Dark Matter (DM) problem, supported by a wealth of evidence ranging from precision cosmology to local astrophysical observations, has been suggesting that new particles should exist. Unfortunately, neither the LHC nor dedicated DM experiments have detected any significant exotic signals pointing toward a particular new-physics extension of the SM so far.
With this proposal, I want to take a new path in the quest for new physics by providing the first high-precision measurement of neutral-current Coherent Elastic Neutrino-Nucleus Scattering (CENNS). By focusing on the sub-100 eV CENNS-induced nuclear recoils, my goal is to reach unprecedented sensitivities to various exotic physics scenarios with major implications from cosmology to particle physics, beyond the reach of existing particle physics experiments. These include, for instance, the existence of sterile neutrinos and of new mediators, which could be related to the DM problem, and the possibility of Non-Standard Interactions, which would have tremendous implications for the global neutrino physics program.
To this end, I propose to build a kg-scale cryogenic tabletop neutrino experiment with outstanding sensitivity to low-energy nuclear recoils, called CryoCube, that will be deployed at an optimal nuclear reactor site. The key feature of this proposed detector technology is to combine two target materials: semiconducting Ge and superconducting Zn metal. I want to push these two detector techniques beyond state-of-the-art performance to reach sub-100 eV energy thresholds with unparalleled background rejection capabilities.
As my proposed CryoCube detector will reach a 5-sigma level CENNS detection significance in a single day, it will be uniquely positioned to probe new physics extensions beyond the SM.
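As a back-of-the-envelope check of how a one-day 5-sigma detection can arise (illustrative counting rates invented by us, not taken from the proposal), a simple Gaussian estimate of the significance of $s$ signal counts per day over $b$ background counts per day after exposure $T$ is $Z = sT/\sqrt{bT}$:

# Toy significance-vs-exposure estimate for a counting experiment.
import math

s, b = 25.0, 25.0          # hypothetical signal and background counts per day
for T in (0.5, 1.0, 2.0):  # exposure in days
    Z = s * T / math.sqrt(b * T)
    print(f"T = {T:3.1f} d  ->  Z = {Z:.1f} sigma")  # reaches 5.0 sigma at T = 1 day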
Max ERC Funding
1 495 000 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym Chi2-Nano-Oxides
Project Second-Order Nano-Oxides for Enhanced Nonlinear Photonics
Researcher (PI) Rachel GRANGE RODUIT
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE5, ERC-2016-STG
Summary Nonlinear optics is present in our daily life with applications, e.g., light sources for microsurgery or green laser pointers. All of them use bulk materials such as glass fibers or crystals. Generating nonlinear effects from materials at the nanoscale would expand the applications to biology, as imaging markers, or to optoelectronic integrated devices. However, nonlinear signals scale with the volume of a material. Therefore, finding materials with nonlinearities high enough to avoid high powers and long interaction lengths is challenging. Many studies focus on third-order nonlinearities (described by a χ(3) tensor), present in every material (silicon, graphene…), or on metals for enhancing nonlinearities with plasmonics. My approach is to explore second-order χ(2) nanomaterials, since they show higher nonlinearities than χ(3) ones, plus additional properties such as birefringence, a wide band gap for transparency, a high refractive index (n>2), and no ohmic losses. Typical χ(2) materials are oxides (BaTiO3, LiNbO3…) with a non-centrosymmetric crystal structure used for wavelength conversion, as in second-harmonic generation (SHG).
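The volume constraint mentioned above can be made explicit: for coherent SHG from a particle much smaller than the wavelength, the radiated second-harmonic power grows quadratically with both pump intensity and particle volume (a textbook scaling, stated here for context):

$$P(2\omega) \propto \left|\chi^{(2)}\right|^{2} V^{2}\, I(\omega)^{2}$$

Since $V \sim d^{3}$, halving the particle diameter $d$ reduces the SHG signal by a factor of $2^{6} = 64$, which is why high-χ(2) materials and resonant enhancement matter so much at the nanoscale.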
The key idea is to demonstrate original strategies to enhance the SHG of χ(2) nano-oxides with the material itself, without involving any hybrid effects from other materials such as the plasmonic resonances of metals. First, I propose to use multiple Mie resonances from BaTiO3 nanoparticles to boost SHG in the UV to NIR range. Up to now, Mie effects at the nanoscale have been measured in materials with no χ(2) nonlinearities (silicon spheres). Second, since χ(2) oxides are difficult to etch, I will overcome this fabrication issue by demonstrating solution-processed imprint lithography to form high-quality photonic crystal cavities from nanoparticles. Third, I will use facet processing of a single LiNbO3 nanowire to obtain directionality effects for spectroscopy on-a-chip. This work fosters applications and commercial devices, offering a sustainable future to this field.
Max ERC Funding
1 500 000 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym CHIC
Project On CHip terahertz frequency Combs
Researcher (PI) Giacomo Scalari
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Consolidator Grant (CoG), PE7, ERC-2016-COG
Summary The terahertz (THz) portion of the electromagnetic spectrum is the junction between optics and electronics. THz is a gateway to sensing applications and spectroscopy, and is appealing for material inspection, non-invasive imaging for safety and medical applications, and short-range high-data-rate wireless communication, all of which are being extended to higher frequencies entering the THz range. Optical frequency combs have dominated the scene of laser physics in the last 10 years, revolutionizing many fields of optics from metrology to high-precision spectroscopy. Optical frequency combs act as rulers in the frequency domain and are characterized by their perfectly equally spaced and coherent modes. An extremely appealing application of optical frequency combs is so-called dual-comb spectroscopy, where multi-heterodyne detection is performed, allowing Fourier-transform spectroscopy with high resolution, high sensitivity and no moving parts.
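The multi-heterodyne principle can be stated in one line (the standard dual-comb relation, given here for context): beating a comb of repetition rate $f_{\mathrm{rep}}$ against a second comb of rate $f_{\mathrm{rep}} + \Delta f_{\mathrm{rep}}$ maps the $n$-th pair of optical lines to a radio frequency

$$f_{\mathrm{RF},n} = f_{0} + n\,\Delta f_{\mathrm{rep}}$$

where $f_{0}$ collects the offset-frequency difference, so the optical spectrum is compressed into the RF domain by the factor $f_{\mathrm{rep}}/\Delta f_{\mathrm{rep}}$ and can be read out with a single fast detector and no moving parts.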
The objective of this proposal is to create on-chip, self-referenced frequency combs operating in the spectral region from 1.5 to 5.5 THz. Two main approaches will be followed: direct generation with THz QC lasers (cryogenically cooled) and room-temperature non-linear generation by means of Mid-IR QCL combs. Such devices will be groundbreaking, since they will allow high-resolution THz spectroscopy and pave the way to high-rate local data transmission and coherent communication. We recently demonstrated octave-spanning lasing from a THz QCL: this will constitute the foundation of our efforts. The developed combs will be implemented in the extremely powerful dual-comb scheme with innovative on-chip self-stabilization and detection of the multi-heterodyne signals. The self-referencing and the independence from an external detector make the proposed devices disruptive due to their extreme compactness, intrinsic stability and large bandwidth.
Max ERC Funding
1 999 055 €
Duration
Start date: 2017-03-01, End date: 2022-02-28