Project acronym 2D4QT
Project 2D Materials for Quantum Technology
Researcher (PI) Christoph STAMPFER
Host Institution (HI) RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN
Call Details Consolidator Grant (CoG), PE3, ERC-2018-COG
Summary Since its discovery, graphene has been put forward as a promising platform for quantum technologies (QT). The number of theoretical proposals dedicated to this vision has grown steadily, exploring a wide range of directions, from spin and valley qubits to topologically protected states. Experimental confirmation of these ideas has so far lagged significantly behind, mostly because of material quality problems. The quality of graphene-based devices has, however, improved dramatically in the past five years, thanks to the advent of so-called van der Waals (vdW) heterostructures - artificial solids formed by mechanically stacking layers of different two-dimensional (2D) materials, such as graphene, hexagonal boron nitride and transition metal dichalcogenides. These advances now finally open the door to putting several of those theoretical proposals to the test.
The goal of this project is to assess experimentally the potential of graphene-based heterostructures for QT applications. Specifically, I will push the development of an advanced technological platform for vdW heterostructures, which will make it possible to give quantitative answers to the following open questions: i) what are the relaxation and coherence times of spin and valley qubits in isotopically purified bilayer graphene (BLG); ii) what is the efficiency of a Cooper-pair splitter based on BLG; and iii) what are the characteristic energy scales of topologically protected quantum states engineered in graphene-based heterostructures.
At the end of this project, I aim to be in a position to say whether graphene is the horse worth betting on that theory predicts, or whether it still hides surprises in terms of fundamental physics. The technological advancements developed in this project for integrating nanostructured layers into vdW heterostructures will reach beyond this goal, opening the door to new research directions and possible applications.
Max ERC Funding
1 806 250 €
Duration
Start date: 2019-09-01, End date: 2024-08-31
Project acronym 3D-FIREFLUC
Project Taming the particle transport in magnetized plasmas via perturbative fields
Researcher (PI) Eleonora VIEZZER
Host Institution (HI) UNIVERSIDAD DE SEVILLA
Call Details Starting Grant (StG), PE2, ERC-2018-STG
Summary Wave-particle interactions are ubiquitous in nature and play a fundamental role in astrophysical and fusion plasmas. In solar plasmas, magnetohydrodynamic (MHD) fluctuations are thought to be responsible for the heating of the solar corona and the generation of the solar wind. In magnetically confined fusion (MCF) devices, enhanced particle transport induced by MHD fluctuations can deteriorate the plasma confinement and even endanger the device integrity. MCF devices are an ideal testbed to verify current models and develop mitigation and protection techniques.
The proposed project paves the way for active control techniques that tame MHD-induced particle transport in a fusion plasma. Developing such techniques requires a solid understanding of the interaction between energetic particles and MHD instabilities in the presence of electric fields and plasma currents. I will pursue this goal through innovative diagnostic techniques with unprecedented spatio-temporal resolution. Combined with state-of-the-art hybrid MHD codes, these will give deep insight into the underlying physics mechanisms. The outcome of this research project will have a major impact on next-step MCF devices, as I will provide ground-breaking control techniques for mitigating MHD-induced particle transport in magnetized plasmas.
The project consists of 3 research lines which follow a bottom-up approach:
(1) Cutting-edge instrumentation, aiming at the new generation of energetic particle and edge current diagnostics.
(2) Unravelling the dynamics of energetic particles, electric fields, edge currents and MHD fluctuations.
(3) From lab to space weather: The developed models will revolutionize our understanding of the observed particle acceleration and transport in the solar corona.
Based on this approach, the project represents a gateway between the fusion, astrophysics and space communities opening new avenues for a common basic understanding.
Max ERC Funding
1 512 250 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym 3D-PXM
Project 3D Piezoresponse X-ray Microscopy
Researcher (PI) Hugh SIMONS
Host Institution (HI) DANMARKS TEKNISKE UNIVERSITET
Call Details Starting Grant (StG), PE3, ERC-2018-STG
Summary Polar materials, such as piezoelectrics and ferroelectrics, are essential to modern life, yet they are mostly developed by trial and error. Their properties overwhelmingly depend on the defects within them, the majority of which are hidden in the bulk. The road to better materials lies in mapping these defects, but our best tool for this - piezoresponse force microscopy (PFM) - is limited to surfaces. 3D-PXM aims to revolutionize our understanding by measuring the local structure-property correlations around individual defects buried deep in the bulk.
This is a completely new kind of microscopy, enabling 3D maps of local strain and polarization (i.e. piezoresponse) with 10 nm resolution in mm-sized samples. It is novel, multi-scale and fast enough to capture defect dynamics in real time. Uniquely, it is a full-field method that uses a synthetic-aperture approach both to improve resolution and to recover the image phase. This phase is then quantitatively correlated to local polarization and strain via a forward model. 3D-PXM combines advances in X-ray optics, phase recovery and data analysis to create something transformative. In principle, it can achieve spatial resolution comparable to the best coherent X-ray microscopy methods while being faster, applicable to larger samples, and free of the risk of radiation damage.
For the first time, this opens the door to solving how defects influence bulk properties under real-life conditions. 3D-PXM focuses on three types of defects prevalent in polar materials: grain boundaries, dislocations and polar nanoregions. Individually, these address major gaps in the state of the art; together, they make great strides towards fully understanding defects. This understanding is expected to inform a new generation of multi-scale models that can account for a material’s full heterogeneity. These models are the first step towards abandoning our tradition of trial and error, and with this comes the potential for a new era of polar materials.
Max ERC Funding
1 496 941 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym 3D-QUEST
Project 3D-Quantum Integrated Optical Simulation
Researcher (PI) Fabio Sciarrino
Host Institution (HI) UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA
Call Details Starting Grant (StG), PE2, ERC-2012-StG_20111012
Summary "Quantum information was born from the merging of classical information and quantum physics. Its main objective consists of understanding the quantum nature of information and learning how to process it by using physical systems which operate by following quantum mechanics laws. Quantum simulation is a fundamental instrument to investigate phenomena of quantum systems dynamics, such as quantum transport, particle localizations and energy transfer, quantum-to-classical transition, and even quantum improved computation, all tasks that are hard to simulate with classical approaches. Within this framework integrated photonic circuits have a strong potential to realize quantum information processing by optical systems.
The aim of 3D-QUEST is to develop and implement quantum simulation by exploiting 3-dimensional integrated photonic circuits. 3D-QUEST is structured to demonstrate the potential of linear optics to implement a computational power beyond the one of a classical computer. Such ""hard-to-simulate"" scenario is disclosed when multiphoton-multimode platforms are realized. The 3D-QUEST research program will focus on three tasks of growing difficulty.
A-1. To simulate bosonic-fermionic dynamics with integrated optical systems acting on 2 photon entangled states.
A-2. To pave the way towards hard-to-simulate, scalable quantum linear optical circuits by investigating m-port interferometers acting on n-photon states with n>2.
A-3. To exploit 3-dimensional integrated structures for the observation of new quantum optical phenomena and for the quantum simulation of more complex scenarios.
3D-QUEST will exploit the potential of the femtosecond laser writing integrated waveguides. This technique will be adopted to realize 3-dimensional capabilities and high flexibility, bringing in this way the optical quantum simulation in to new regime."
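To illustrate the hard-to-simulate regime targeted in A-2, the following sketch (an illustrative toy under assumed parameters, not project code) computes a multiphoton transition probability in a random m-port interferometer. For single photons entering and leaving distinct modes, the amplitude is the permanent of an n x n submatrix of the interferometer's unitary, and computing permanents is #P-hard in general - the standard argument for why n > 2 photons quickly outrun classical simulation.

    import numpy as np
    from itertools import permutations

    def permanent(M):
        """Naive O(n * n!) permanent; already impractical for modest n."""
        n = M.shape[0]
        return sum(np.prod([M[i, p[i]] for i in range(n)])
                   for p in permutations(range(n)))

    rng = np.random.default_rng(0)
    m, n = 6, 3                             # assumed: 6 modes, 3 photons
    A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    U, _ = np.linalg.qr(A)                  # random m x m unitary (Haar-like)

    inputs, outputs = [0, 1, 2], [1, 3, 5]  # in modes 0,1,2; out modes 1,3,5
    amp = permanent(U[np.ix_(outputs, inputs)])
    print("P(0,1,2 -> 1,3,5) =", abs(amp) ** 2)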
Max ERC Funding
1 474 800 €
Duration
Start date: 2012-08-01, End date: 2017-07-31
Project acronym 4DPHOTON
Project Beyond Light Imaging: High-Rate Single-Photon Detection in Four Dimensions
Researcher (PI) Massimiliano FIORINI
Host Institution (HI) ISTITUTO NAZIONALE DI FISICA NUCLEARE
Call Details Consolidator Grant (CoG), PE2, ERC-2018-COG
Summary The goal of the 4DPHOTON project is the development and construction of a photon imaging detector with unprecedented performance. The proposed device will be capable of detecting single-photon fluxes of up to one billion photons per second over areas of several square centimetres, and will measure - for each photon - position and time simultaneously, with resolutions better than ten microns and a few tens of picoseconds, respectively. These figures of merit will open up many important applications, allowing significant advances in particle physics, the life sciences and other emerging fields where excellent timing and position resolution are required simultaneously.
Our goal will be achieved thanks to an application-specific integrated circuit in 65 nm complementary metal-oxide-semiconductor (CMOS) technology, which will deliver a timing resolution of a few tens of picoseconds at the pixel level over a few hundred thousand individually active pixel channels, allowing very high photon rates to be detected and the corresponding information to be digitized and transferred to a processing unit.
As a result of the 4DPHOTON project, we will remove the constraints that many light imaging applications face due to the lack of precise single-photon information in four dimensions (4D): the three spatial coordinates and time simultaneously. In particular, we will prove the performance of this detector in the field of particle physics, performing the reconstruction of Cherenkov photon rings with a timing resolution of ten picoseconds. With its excellent granularity, timing resolution, rate capability and compactness, this detector will represent a new paradigm for the realisation of future Ring Imaging Cherenkov detectors, capable of achieving high-efficiency particle identification in environments with very high particle multiplicities, exploiting time-association of the photon hits.
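A back-of-envelope estimate (assumed hit encoding; the summary does not specify the actual ASIC data format) indicates the readout bandwidth implied by these figures:

    # Python sketch: raw hit-data rate at the quoted photon flux.
    rate_hz   = 1e9   # up to 10^9 detected photons per second (from above)
    addr_bits = 18    # assumed: a few hundred thousand pixels -> ~18-bit address
    time_bits = 20    # assumed timestamp width at ~10 ps binning
    gbytes_per_s = rate_hz * (addr_bits + time_bits) / 8 / 1e9
    print(f"~{gbytes_per_s:.1f} GB/s of raw hit data")  # ~4.8 GB/s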
Max ERC Funding
1 975 000 €
Duration
Start date: 2019-12-01, End date: 2024-11-30
Project acronym 4TH-NU-AVENUE
Project Search for a fourth neutrino with a PBq anti-neutrino source
Researcher (PI) Thierry Michel René Lasserre
Host Institution (HI) COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Call Details Starting Grant (StG), PE2, ERC-2012-StG_20111012
Summary Several observed anomalies in neutrino oscillation data can be explained by a hypothetical fourth neutrino separated from the three standard neutrinos by a squared mass difference of a few eV². This hypothesis can be tested with a PBq (ten-kilocurie-scale) ¹⁴⁴Ce antineutrino beta-source deployed at the center of a large low-background liquid scintillator detector, such as Borexino, KamLAND or SNO+. In particular, the compact size of such a source could yield an energy-dependent oscillating pattern in the spatial distribution of events that would unambiguously determine neutrino mass differences and mixing angles.
The proposed program aims to perform the research and development necessary to produce and deploy an intense antineutrino source in a large liquid scintillator detector. Our program will address the definition of the production process of the neutrino source as well as its experimental characterization, the detailed physics simulation of both signal and backgrounds, the complete design and realization of the thick shielding, and the preparation of the interfaces with the antineutrino detector, including safety and security aspects.
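For illustration, the standard two-flavour survival probability below reproduces the kind of metre-scale oscillating pattern referred to above; the mixing amplitude and mass splitting are assumed values, not the project's results.

    import numpy as np

    SIN2_2THETA = 0.1   # assumed sterile mixing amplitude sin^2(2*theta)
    DM2_EV2     = 2.0   # assumed squared mass difference [eV^2]

    def survival_probability(L_m, E_MeV):
        """P(anti-nu_e -> anti-nu_e) at baseline L [m] and energy E [MeV]."""
        return 1.0 - SIN2_2THETA * np.sin(1.27 * DM2_EV2 * L_m / E_MeV) ** 2

    # A ~2 MeV antineutrino oscillates on the metre scale, so the full
    # pattern fits inside a large liquid scintillator detector:
    for L in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
        print(f"L = {L:.0f} m  ->  P = {survival_probability(L, 2.0):.3f}")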
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-10-01, End date: 2018-09-30
Project acronym ABINITIODGA
Project Ab initio Dynamical Vertex Approximation
Researcher (PI) Karsten Held
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE3, ERC-2012-StG_20111012
Summary Some of the most fascinating physical phenomena are experimentally observed in strongly correlated electron systems yet remain, on the theoretical side, only poorly understood. The aim of the ERC project AbinitioDGA is the development, implementation and application of a new, 21st-century method for the ab initio calculation of materials with such strong electronic correlations. AbinitioDGA includes strong electronic correlations on all time and length scales and hence is a big step beyond state-of-the-art methods such as the local density approximation, dynamical mean field theory, and the GW approach (Green function G times screened interaction W). It has the potential for an extraordinarily high impact, not only in the field of computational materials science but also for a better understanding of quantum critical heavy fermion systems, high-temperature superconductors, and transport through nano- and heterostructures. These four physical problems and related materials will be studied within the ERC project, alongside the methodological development.
On the technical side, AbinitioDGA realizes Hedin's idea of including vertex corrections beyond the GW approximation. All vertex corrections which can be traced back to a fully irreducible local vertex and the bare non-local Coulomb interaction are included. This way, AbinitioDGA not only contains the GW physics of screened exchange and the strong local correlations of dynamical mean field theory, but also non-local correlations beyond these on all length scales. Through the latter, AbinitioDGA can prospectively describe phenomena such as quantum criticality, spin-fluctuation-mediated superconductivity, and weak localization corrections to the conductivity. Nonetheless, the computational effort remains manageable even for realistic materials calculations, making the considerable effort to implement AbinitioDGA worthwhile.
Max ERC Funding
1 491 090 €
Duration
Start date: 2013-01-01, End date: 2018-07-31
Project acronym ACTIVIA
Project Visual Recognition of Function and Intention
Researcher (PI) Ivan Laptev
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Computer vision is concerned with the automated interpretation of images and video streams. Today's research is (mostly) aimed at answering queries such as ""Is this a picture of a dog?"", (classification) or sometimes ""Find the dog in this photo"" (detection). While categorisation and detection are useful for many tasks, inferring correct class labels is not the final answer to visual recognition. The categories and locations of objects do not provide direct understanding of their function i.e., how things work, what they can be used for, or how they can act and react. Such an understanding, however, would be highly desirable to answer currently unsolvable queries such as ""Am I in danger?"" or ""What can happen in this scene?"". Solving such queries is the aim of this proposal.
My goal is to uncover the functional properties of objects and the purpose of actions by addressing visual recognition from a different and yet unexplored perspective. The main novelty of this proposal is to leverage observations of people, i.e., their actions and interactions to automatically learn the use, the purpose and the function of objects and scenes from visual data. The project is timely as it builds upon the two key recent technological advances: (a) the immense progress in visual recognition of objects, scenes and human actions achieved in the last ten years, as well as (b) the emergence of a massive amount of public image and video data now available to train visual models.
ACTIVIA addresses fundamental research issues in automated interpretation of dynamic visual scenes, but its results are expected to serve as a basis for ground-breaking technological advances in practical applications. The recognition of functional properties and intentions as explored in this project will directly support high-impact applications such as detection of abnormal events, which are likely to revolutionise today's approaches to crime protection, hazard prevention, elderly care, and many others."
Max ERC Funding
1 497 420 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym ADAPT
Project Theory and Algorithms for Adaptive Particle Simulation
Researcher (PI) Stephane Redon
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "During the twentieth century, the development of macroscopic engineering has been largely stimulated by progress in digital prototyping: cars, planes, boats, etc. are nowadays designed and tested on computers. Digital prototypes have progressively replaced actual ones, and effective computer-aided engineering tools have helped cut costs and reduce production cycles of these macroscopic systems.
The twenty-first century is most likely to see a similar development at the atomic scale. Indeed, the recent years have seen tremendous progress in nanotechnology - in particular in the ability to control matter at the atomic scale. Similar to what has happened with macroscopic engineering, powerful and generic computational tools will be needed to engineer complex nanosystems, through modeling and simulation. As a result, a major challenge is to develop efficient simulation methods and algorithms.
NANO-D, the INRIA research group I started in January 2008 in Grenoble, France, aims at developing
efficient computational methods for modeling and simulating complex nanosystems, both natural and artificial. In particular, NANO-D develops SAMSON, a software application which gathers all algorithms designed by the group and its collaborators (SAMSON: Software for Adaptive Modeling and Simulation Of Nanosystems).
In this project, I propose to develop a unified theory, and associated algorithms, for adaptive particle simulation. The proposed theory will avoid problems that plague current popular multi-scale or hybrid simulation approaches by simulating a single potential throughout the system, while allowing users to finely trade precision for computational speed.
I believe the full development of the adaptive particle simulation theory will have an important impact on current modeling and simulation practices, and will enable practical design of complex nanosystems on desktop computers, which should significantly boost the emergence of generic nano-engineering."
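As a toy illustration of trading precision for computational speed (a sketch under simplifying assumptions, not the adaptive theory proposed here), one can integrate only the k particles currently subject to the largest forces and freeze the rest, so that k acts as the precision/speed dial:

    import numpy as np

    def lj_forces(x):
        """Pairwise Lennard-Jones forces for positions x of shape (n, 3)."""
        d = x[:, None, :] - x[None, :, :]
        r2 = (d ** 2).sum(-1) + np.eye(len(x))   # dummy self-distance
        mag = 24 * (2 * r2 ** -7 - r2 ** -4)     # (force magnitude) / r
        return (mag[..., None] * d).sum(axis=1)

    def adaptive_step(x, v, k, dt=1e-3):
        """Advance only the k most-forced particles; freeze the others."""
        f = lj_forces(x)
        active = np.argsort((f ** 2).sum(-1))[-k:]
        v[active] += dt * f[active]              # unit masses assumed
        x[active] += dt * v[active]
        return x, v

    rng = np.random.default_rng(1)
    x, v = 2.0 * rng.normal(size=(50, 3)), np.zeros((50, 3))
    for _ in range(100):
        x, v = adaptive_step(x, v, k=10)         # k=50 recovers the full update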
Max ERC Funding
1 476 882 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym ADAPT
Project The Adoption of New Technological Arrays in the Production of Broadcast Television
Researcher (PI) John Cyril Paget Ellis
Host Institution (HI) ROYAL HOLLOWAY AND BEDFORD NEW COLLEGE
Call Details Advanced Grant (AdG), SH5, ERC-2012-ADG_20120411
Summary "Since 1960, the television industry has undergone successive waves of technological change. Both the methods of programme making and the programmes themselves have changed substantially. The current opening of TV’s vast archives to public and academic use has emphasised the need to explain old programming to new users. Why particular programmes are like they are is not obvious to the contemporary viewer: the prevailing technologies imposed limits and enabled forms that have fallen into disuse. The project will examine the processes of change which gave rise to the particular dominant configurations of technologies for sound and image capture and processing, and some idea of the national and regional variants that existed. It will emphasise the capabilities of the machines in use rather than the process of their invention. The project therefore studies how the technologies of film and tape were implemented; how both broadcasters and individual filmers coped with the conflicting demands of the different machines at their disposal; how new ‘standard ways of doing things’ gradually emerged; and how all of this enabled desired changes in the resultant programmes. The project will produce an overall written account of the principal changes in the technologies in use in broadcast TV since 1960 to the near present. It will offer a theory of technological innovation, and a major case study in the adoption of digital workflow management in production for broadcasting: the so-called ‘tapeless environment’ which is currently being implemented in major organisations. It will offer two historical case studies: a longditudinal study of the evolution of tape-based sound recording and one of the rapid change from 16mm film cutting to digital editing, a process that took less than five years. Reconstructions of the process of working with particular technological arrays will be filmed and will be made available as explanatory material for any online archive of TV material ."
Max ERC Funding
1 680 121 €
Duration
Start date: 2013-08-01, End date: 2018-07-31