Project acronym ATTOSCOPE
Project Measuring attosecond electron dynamics in molecules
Researcher (PI) Hans Jakob Wörner
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary "The goal of the present proposal is to realize measurements of electronic dynamics in polyatomic
molecules with attosecond temporal resolution (1 as = 10^-18 s). We propose to study electronic
rearrangements following photoexcitation, charge migration in a molecular chain induced by
ionization and non-adiabatic multi-electron dynamics in an intense laser field. The grand question
addressed by this research is the characterization of electron correlations which control the shape, properties and function of molecules. In all three proposed projects, a time-domain approach appears to be the most suitable since it reduces complex molecular dynamics to the purely electronic dynamics by exploiting the hierarchy of motional time scales. Experimentally, we propose to realize an innovative experimental setup. A few-cycle infrared (IR) pulse will be used to generate attosecond pulses in the extreme-ultraviolet (XUV) by high-harmonic generation. The IR pulse will be separated from the XUV by means of an innovative interferometer. Additionally, it will permit the introduction of a controlled attosecond delay between the two pulses. We propose to use the attosecond pulses as a tool to look inside individual IR- or UV-field cycles to better understand light-matter interactions. Time-resolved pump-probe experiments will be carried out on polyatomic molecules by detecting the energy and angular distribution of photoelectrons in a velocity-map imaging spectrometer. These experiments are expected to provide new insights
into the dynamics of multi-electron systems along with new results for the validation and
improvement of theoretical models. Multi-electron dynamics is indeed a very complex subject
on its own and even more so in the presence of strong laser fields. The proposed experiments
directly address these challenges and are expected to provide new insights that will be beneficial to a wide range of scientific research areas."
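To make the controlled attosecond delay concrete, the sketch below (our illustration, not part of the proposal) uses only the defining relation Δx = c·Δt to convert a requested pump-probe delay into the optical path-length difference an interferometer arm would have to introduce; the function name and example delays are hypothetical.

```python
# Illustrative only: relates a desired attosecond pump-probe delay to the
# optical path-length difference (delta_x = c * delta_t) an interferometer
# arm must introduce. Numbers are generic, not project specifications.

C = 2.998e8  # speed of light in m/s

def path_difference_nm(delay_as: float) -> float:
    """Path-length change (nm) corresponding to a delay given in attoseconds."""
    delay_s = delay_as * 1e-18
    return C * delay_s * 1e9  # metres -> nanometres

if __name__ == "__main__":
    for delay in (1, 50, 500):  # attoseconds
        print(f"{delay:>4} as  ->  {path_difference_nm(delay):8.3f} nm path difference")
```

A 1 as delay corresponds to roughly 0.3 nm of path difference, which is why sub-nanometre interferometric stability is the relevant engineering scale here.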
Max ERC Funding
1 999 992 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym COMET
Project foundations of COmputational similarity geoMETry
Researcher (PI) Michael Bronstein
Host Institution (HI) UNIVERSITA DELLA SVIZZERA ITALIANA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Similarity is one of the most fundamental notions encountered in problems practically in every branch of science, and is especially crucial in image sciences such as computer vision and pattern recognition. The need to quantify similarity or dissimilarity of some data is central to broad categories of problems involving comparison, search, matching, alignment, or reconstruction. The most common way to model a similarity is using metrics (distances). Such constructions are well-studied in the field of metric geometry, and there exist numerous computational algorithms allowing, for example, to represent one metric using another by means of isometric embeddings.
However, in many applications such a model appears to be too restrictive: many types of similarity are non-metric; it is not always possible to model the similarity precisely or completely e.g. due to missing data; some objects might be mutually incomparable e.g. if they are coming from different modalities. Such deficiencies of the metric similarity model are especially pronounced in large-scale computer vision, pattern recognition, and medical imaging applications.
The ambitious goal of this project is to introduce a paradigm shift in the way we model and compute similarity. We will develop a unifying framework of computational similarity geometry that extends the theoretical metric model, and will allow developing efficient numerical and computational tools for the representation and computation of generic similarity models. The methods will be developed all the way from mathematical concepts to efficiently implemented code and will be applied to today’s most important and challenging problems in Internet-scale computer vision and pattern recognition, shape analysis, and medical imaging."
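As an illustration of why non-metric similarities fall outside the classical model, the following minimal sketch (ours, not project code) checks whether a pairwise dissimilarity matrix satisfies the metric axioms; the example matrix is made up and violates the triangle inequality.

```python
# Minimal sketch: test whether a pairwise dissimilarity matrix is a metric
# (symmetric, zero diagonal, non-negative, triangle inequality).
import numpy as np

def is_metric(D: np.ndarray, tol: float = 1e-9) -> bool:
    """True if D satisfies the metric axioms, including D[i,k] <= D[i,j] + D[j,k]."""
    if not (np.allclose(D, D.T, atol=tol)
            and np.allclose(np.diag(D), 0.0, atol=tol)
            and (D >= -tol).all()):
        return False
    for j in range(D.shape[0]):  # check every triangle through intermediate point j
        if (D > D[:, [j]] + D[[j], :] + tol).any():
            return False
    return True

# Hypothetical "human judgment" dissimilarity that breaks the triangle inequality:
D = np.array([[0.0, 1.0, 5.0],
              [1.0, 0.0, 1.0],
              [5.0, 1.0, 0.0]])
print(is_metric(D))  # False: d(0,2) = 5 > d(0,1) + d(1,2) = 2
```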
Max ERC Funding
1 495 020 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym comporel
Project Large-Scale Computational Screening and Design of Highly-ordered pi-conjugated Molecular Precursors to Organic Electronics
Researcher (PI) Anne-Clemence Corminboeuf
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary The field of electronics has been a veritable powerhouse of the economy, driving technological breakthroughs that affect all aspects of everyday life. Aside from silicon, there has been growing interest in developing a novel generation of electronic devices based on π-conjugated polymers and oligomers. While their goal is not to exceed the performance of silicon technologies, they could enable far lower fabrication costs as well as completely new functionalities (e.g. mechanical flexibility, transparency, impact resistance). The performance of these organic devices is greatly dependent on the organization and electronic structures of π-conjugated polymer chains at the molecular level. To achieve their full potential, technological developments require fine-tuning of the relative orientation/position of the π-conjugated moieties, which provides a practical means to enhance electronic properties. The discovery pace of novel materials can be accelerated considerably by the development of efficient computational schemes. This requires an integrated approach in which the structural, electronic, and charge transport properties of novel molecular candidates are evaluated computationally and predictions are benchmarked by proof-of-principle experiments. This research program aims at developing a threefold computational screening strategy enabling the design of an emerging class of molecular precursors based on the insertion of π-conjugated molecules into self-assembled hydrogen-bond aggregator segments (e.g. oligopeptide, nucleotide and carbohydrate motifs). These bioinspired, functionalized π-conjugated systems offer the highly desirable prospect of achieving the ordered suprastructures abundant in nature together with the enhanced functionalities only observed in synthetic polymers. A more holistic objective is to definitively establish the relationship between highly ordered architectures and the nature of the electronic interactions and charge transfer properties in the assemblies.
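The sketch below is a purely schematic rendering of such a computational screening loop, not the project's actual pipeline: hypothetical candidate precursors are ranked by an arbitrary composite score over placeholder descriptors (HOMO-LUMO gap, stacking distance, transfer integral), whose names and weights are ours.

```python
# Schematic screening loop with placeholder descriptors and a toy scoring rule.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    homo_lumo_gap_ev: float        # hypothetical electronic-structure descriptor
    stacking_distance_ang: float   # hypothetical packing/order descriptor
    transfer_integral_mev: float   # hypothetical charge-transport descriptor

def score(c: Candidate) -> float:
    """Toy composite score: favour large transfer integrals and tight stacking."""
    return c.transfer_integral_mev / (c.stacking_distance_ang * max(c.homo_lumo_gap_ev, 0.1))

candidates = [
    Candidate("precursor_A", 2.1, 3.4, 60.0),
    Candidate("precursor_B", 1.8, 3.6, 45.0),
]
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: score = {score(c):.2f}")
```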
Max ERC Funding
1 482 240 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym DEPENDABLECLOUD
Project Towards the dependable cloud: Building the foundations for tomorrow's dependable cloud computing
Researcher (PI) Rodrigo Seromenho Miragaia Rodrigues
Host Institution (HI) INESC ID - INSTITUTO DE ENGENHARIA DE SISTEMAS E COMPUTADORES, INVESTIGACAO E DESENVOLVIMENTO EM LISBOA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Cloud computing is being increasingly adopted by individuals, organizations, and governments. However, as the computations that are offloaded to the cloud expand to societal-critical services, the dependability requirements of cloud services become much higher, and we need to ensure that the infrastructure that supports these services is ready to meet these requirements. In particular, this proposal tackles the challenges that arise from two distinctive characteristics of the cloud infrastructure.
The first is that non-crash faults, despite being considered highly unlikely by the designers of traditional systems, become commonplace at the scale and complexity of the cloud infrastructure. We argue that the current ad-hoc methods for handling these faults are insufficient, and that the only principled approach of assuming Byzantine faults is too pessimistic. Therefore, we call for a new systematic approach to tolerating non-crash, non-adversarial faults. This requires the definition of a new fault model, and the construction of a series of building blocks and key protocol elements that enable the construction of fault-tolerant cloud services.
The second issue is that to meet their scalability requirements, cloud services spread their state across multiple data centers, and direct users to the closest one. This raises the issue that not all operations can be executed optimistically, without being aware of concurrent operations over the same data, and thus multiple levels of consistency must coexist. However, this puts the onus of reasoning about which behaviors are allowed under such a hybrid consistency model on the programmer of the service. We propose a systematic solution to this problem, which includes a novel consistency model that allows for developing highly scalable services that are fast when possible and consistent when necessary, and a labeling methodology to guide the programmer in deciding which operations can run at each consistency level.
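As a toy rendering of such a labeling methodology (our illustration, not the proposed system), the sketch below tags each operation with a consistency level and executes "fast" operations at a single replica while "consistent" operations update every replica before returning; the operation names, labels, and replica class are hypothetical.

```python
# Toy hybrid-consistency dispatch: fast operations run locally, consistent
# operations update all replicas before replying. Purely illustrative.
from enum import Enum

class Level(Enum):
    FAST = "fast"              # execute optimistically at one replica, sync later
    CONSISTENT = "consistent"  # apply at every replica before replying

LABELS = {"add_item": Level.FAST, "checkout": Level.CONSISTENT}  # hypothetical labels

class Replica:
    def __init__(self):
        self.items, self.checked_out = [], False
    def apply(self, op, arg=None):
        if op == "add_item":
            self.items.append(arg)
        elif op == "checkout":
            self.checked_out = True

def execute(op, arg, replicas, local=0):
    if LABELS[op] is Level.FAST:
        replicas[local].apply(op, arg)   # reply immediately; others sync asynchronously
    else:
        for r in replicas:               # stand-in for a coordination protocol
            r.apply(op, arg)

replicas = [Replica(), Replica()]
execute("add_item", "book", replicas)    # fast path: only the local replica updated so far
execute("checkout", None, replicas)      # consistent path: all replicas updated
print([r.items for r in replicas], [r.checked_out for r in replicas])
```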
Max ERC Funding
1 076 084 €
Duration
Start date: 2012-10-01, End date: 2018-01-31
Project acronym iModel
Project Intelligent Shape Modeling
Researcher (PI) Olga Sorkine
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Digital 3D content creation and modeling has become an indispensable part of our technology-driven society. Any modern design and manufacturing process involves manipulation of digital 3D shapes. Many industries have long expected ubiquitous 3D as the next revolution in multimedia. Yet, contrary to “traditional” media such as digital music and video, 3D content creation and editing is not accessible to the general public, and 3D geometric data is not nearly as widespread as anticipated. Despite extensive geometric modeling research in the past two decades, 3D modeling is still a restricted domain and demands tedious, time-consuming and expensive effort even from trained professionals, namely engineers, designers, and digital artists. Geometric modeling is reported to constitute one of the lowest-productivity components of the product life cycle.
The major reason for 3D shape modeling remaining inaccessible and tedious is that our current geometry representation and modeling algorithms focus on low-level mathematical properties of the shapes, entirely missing structural, contextual or semantic information. As a consequence, current modeling systems are unintuitive, inefficient and difficult for humans to work with. We believe that instead of continuing on the current incremental research path, a concentrated effort is required to fundamentally rethink the shape modeling process and re-align research agendas, putting high-level shape structure and function at the core. We propose a research plan that will lead to intelligent digital 3D modeling tools that integrate semantic knowledge about the objects being modeled and provide the user an intuitive and logical response, fostering creativity and eliminating unnecessary low-level manual modeling tasks. Achieving these goals will represent a fundamental change to our current notion of 3D modeling, and will finally enable us to leverage the true potential of digital 3D content for society.
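The contrast between low-level and semantics-aware editing can be illustrated with a deliberately simple toy (ours, not the proposed tools): a high-level edit propagated through an equal-leg-length constraint keeps a table structurally valid, whereas a raw per-leg edit does not.

```python
# Toy contrast between a low-level geometry edit and an edit propagated
# through a semantic constraint ("all legs of the table have equal length").

class Table:
    def __init__(self, leg_lengths):
        self.leg_lengths = list(leg_lengths)

    def edit_low_level(self, leg_index, new_length):
        """Edits a single leg; the user must fix up the other legs by hand."""
        self.leg_lengths[leg_index] = new_length

    def edit_semantic(self, new_length):
        """Edits 'the leg length' as one high-level parameter; the equal-length
        constraint keeps the shape structurally valid."""
        self.leg_lengths = [new_length] * len(self.leg_lengths)

t = Table([70, 70, 70, 70])
t.edit_low_level(0, 75)   # -> [75, 70, 70, 70]: structure broken, table wobbles
t.edit_semantic(75)       # -> [75, 75, 75, 75]: design intent preserved automatically
print(t.leg_lengths)
```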
Max ERC Funding
1 497 442 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym IMPRO
Project Implicit Programming
Researcher (PI) Viktor Kuncak
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "I propose implicit programming, a paradigm for developing reliable software using new programming language specification constructs and tools, supported through the new notion of software synthesis procedures. The paradigm will enable developers to use specifications as executable programming language constructs and will automate some of the program construction tasks to the point where they become feasible for the end users. Implicit programming will increase developer productivity by enabling developers to focus on the desired software functionality instead of worrying about low-level implementation details. Implicit programming will also improve software reliability, because the presence of specifications will make programs easier to analyze.
From the algorithmic perspective, I propose a new agenda for research in algorithms for decidable logical theories. An input to such an algorithm is a logical formula (or a Boolean-valued programming language expression). Whereas a decision procedure for satisfiability merely checks whether there exists a satisfying assignment for the formula, we propose to develop synthesis procedures. A synthesis procedure views the input as a relation between inputs and outputs, and produces a function from input variables to output variables. In other words, it transforms a specification into a computable function. We will design synthesis procedures for important classes of formulas motivated by useful programming language fragments. We will use synthesis procedures as a compilation mechanism for declarative programming language constructs, ensuring correctness by construction. To develop practical synthesis procedures we will combine insights from decision procedure research (including results on SMT solvers) with research on compiler construction, program analysis, and program transformation. The experience from the rich model toolkit initiative (http://RichModels.org) will help us address these goals."
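As a deliberately tiny stand-in for the distinction drawn above (not the project's actual synthesis procedures, which would derive code symbolically at compile time), the sketch below treats the specification as a relation between input and output: a decision check asks whether some output exists, while "synthesis" returns a function from inputs to outputs. The bounded search is only a placeholder for the derived implementation.

```python
# Specification as an input/output relation; decision vs. synthesis view.

def spec(n: int, r: int) -> bool:
    """Specification: r is the integer square root of n."""
    return r * r <= n < (r + 1) * (r + 1)

def decide(n: int, bound: int = 1000) -> bool:
    """Decision-procedure view: is the spec satisfiable for this input?"""
    return any(spec(n, r) for r in range(bound))

def synthesize(bound: int = 1000):
    """Synthesis-procedure view: return a *function* from inputs to outputs.
    A real synthesis procedure would emit this function at compile time;
    here bounded search stands in for the derived code."""
    def f(n: int) -> int:
        return next(r for r in range(bound) if spec(n, r))
    return f

isqrt = synthesize()
print(decide(30), isqrt(30))  # True 5
```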
Max ERC Funding
1 439 240 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym SCADAPT
Project "Large-scale Adaptive Sensing, Learning and Decision Making: Theory and Applications"
Researcher (PI) Rainer Andreas Krause
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "We address one of the fundamental challenges of our time: Acting effectively while facing a deluge of data. Massive volumes of data are generated from corporate and public sources every second, in social, scientific and commercial applications. In addition, more and more low level sensor devices are becoming available and accessible, potentially to the benefit of myriads of applications. However, access to the data is limited, due to computational, bandwidth, power and other limitations. Crucially, simply gathering data is not enough: we need to make decisions based on the information we obtain. Thus, one of the key problems is: How can we obtain most decision-relevant information at minimum cost?
Most existing techniques are either heuristics with no guarantees, or do not scale to large problems. We recently showed that many information gathering problems satisfy submodularity, an intuitive diminishing-returns condition. Its exploitation allowed us to develop algorithms with strong guarantees and strong empirical performance. However, existing algorithms are limited: they cannot cope with dynamic phenomena that change over time, are inherently centralized, and thus do not scale with modern, distributed computing paradigms. Perhaps most crucially, they have been designed with a focus on gathering data, but not on making decisions based on this data.
We seek to substantially advance large-scale adaptive decision making under partial observability, by grounding it in the novel computational framework of adaptive submodular optimization. We will develop fundamentally new scalable techniques bridging statistical learning, combinatorial optimization, probabilistic inference and decision theory to overcome the limitations of existing methods. In addition to developing novel theory and algorithms, we will demonstrate the performance of our methods on challenging real world interdisciplinary problems in community sensing, information retrieval and computational sustainability."
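To illustrate the role of submodularity, the sketch below applies the classic greedy rule to a toy sensor-coverage objective; for monotone submodular functions under a cardinality budget this rule achieves a (1 - 1/e) approximation (Nemhauser et al., 1978). The sensors and coverage sets are made-up illustration data, not project results.

```python
# Greedy maximization of a monotone submodular coverage function (toy data).

def coverage(selected, cover_sets):
    """Number of distinct locations observed by the selected sensors."""
    return len(set().union(*(cover_sets[s] for s in selected))) if selected else 0

def greedy(cover_sets, budget):
    """Repeatedly add the sensor with the largest marginal coverage gain."""
    selected = []
    for _ in range(budget):
        best = max((s for s in cover_sets if s not in selected),
                   key=lambda s: coverage(selected + [s], cover_sets))
        selected.append(best)
    return selected

cover_sets = {          # hypothetical: sensor -> set of locations it observes
    "s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {1, 6},
}
picked = greedy(cover_sets, budget=2)
print(picked, coverage(picked, cover_sets))  # e.g. ['s1', 's3'] 6
```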
Max ERC Funding
1 499 900 €
Duration
Start date: 2012-11-01, End date: 2017-10-31
Project acronym TOPOPLAN
Project Topographically guided placement of asymmetric nano-objects
Researcher (PI) Armin Wolfgang Knoll
Host Institution (HI) IBM RESEARCH GMBH
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary "The controlled synthesis of nanoparticles in the form of spheres, rods and wires has led to a variety of applications. A much wider spectrum of applications e.g. in integrated devices would be available if a precise placement and alignment relative to neighbouring particles or other functional structures on the substrate is achieved. A potential solution to this challenge is to use top-down methods to guide the placement and orientation of nanoparticles. Ideally, a precise orientation and placement is achieved for a wide range of particle shapes, a so far unresolved challenge.
Here we propose to generate a tunable electrostatic potential minimum by exploiting double-layer potentials between two confining surfaces in liquid. The shape of the potential is determined by the local three-dimensional topography of the confining surfaces. This topography can be precisely tailored using the patterning technology that has been developed in our research group. The potential shape can be adapted to fit a wide range of particle shapes. The trapping energies exceed the thermal energy governing Brownian motion, so particles are trapped and oriented reliably. After trapping, the particles are transferred onto the substrate in a subsequent step by external manipulation.
The separation of the trapping and placement steps has several unique advantages over existing strategies. High-aspect-ratio structures or fragile pre-assembled structures such as nanoparticles linked by DNA strands can be pre-aligned in the trapping field and placed in the desired geometry. For applications such as the placement of quantum dots into high-fidelity cavities, the trapped particles can be examined optically and repelled if their spectral properties do not match. In particular, the precise positioning of nanowires is promising for building up complex circuits for (opto-)electronic applications. Additionally, the trapping and placement processes proceed in parallel, and high throughput can be achieved."
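A back-of-the-envelope check of the "trapping energy versus thermal energy" argument (generic numbers of our own choosing, not the project's measured values): express a hypothetical trap depth in units of k_B·T at room temperature.

```python
# Compare a hypothetical trap depth to the thermal energy k_B * T.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_depth_in_kT(depth_joules: float, temperature_k: float = 298.0) -> float:
    """Express a trapping potential depth in units of k_B*T."""
    return depth_joules / (K_B * temperature_k)

# Hypothetical trap depth of 4e-20 J (about 0.25 eV) at room temperature:
print(f"{trap_depth_in_kT(4e-20):.1f} kT")  # roughly 10 kT -> thermal escape is rare
```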
Max ERC Funding
1 496 526 €
Duration
Start date: 2012-10-01, End date: 2017-09-30