Project acronym FELICITY
Project Foundations of Efficient Lattice Cryptography
Researcher (PI) Vadim Lyubashevsky
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Public key cryptography is the backbone of internet security. Yet it is very likely that within the next few decades some government or corporate entity will succeed in building a general-purpose quantum computer that is capable of breaking all of today's public key protocols. Lattice cryptography, which appears to be resilient to quantum attacks, is currently viewed as the most promising candidate to take over as the basis for cryptography in the future. Recent theoretical breakthroughs have additionally shown that lattice cryptography may even allow for constructions of primitives with novel capabilities. But even though the progress in this latter area has been considerable, the resulting schemes are still extremely impractical.
The central objective of the FELICITY project is to substantially expand the boundaries of efficient lattice-based cryptography. This includes improving on the most crucial cryptographic protocols, some of which are already considered practical, as well as pushing towards efficiency in areas that currently seem out of reach. The methodology that we propose to use differs from the bulk of the research being done today. Rather than directly working on advanced primitives in which practical considerations are ignored, the focus of the project will be on finding novel ways in which to break the most fundamental barriers that are standing in the way of practicality. For this, I believe it is productive to concentrate on building schemes that stand at the frontier of what is considered efficient -- because it is there that the most critical barriers are most apparent. And since cryptographic techniques usually propagate from simple to advanced primitives, improved solutions for the fundamental ones will eventually serve as building blocks for practical constructions of schemes having advanced capabilities.
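The hardness assumption underlying lattice cryptography can be illustrated with a toy Regev-style encryption scheme based on the Learning With Errors (LWE) problem. The parameters below (n=8, m=16, q=97) are illustrative only, far below cryptographic sizes, and the scheme is a textbook sketch rather than one of the project's constructions.

```python
import random

def keygen(n=8, m=16, q=97):
    """LWE key pair: secret s, public samples (A, b = A*s + e mod q)."""
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]          # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b, q)

def encrypt(pub, bit):
    """Encrypt one bit by summing a random subset of the public samples."""
    A, b, q = pub
    rows = [i for i in range(len(A)) if random.random() < 0.5]
    u = [sum(A[i][j] for i in rows) % q for j in range(len(A[0]))]
    v = (sum(b[i] for i in rows) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct, q=97):
    """v - <u, s> lands near 0 for bit 0 and near q/2 for bit 1."""
    u, v = ct
    d = (v - sum(uj * sj for uj, sj in zip(u, s))) % q
    return 1 if q // 4 <= d <= 3 * q // 4 else 0
```

Decryption works because v - <u, s> equals the accumulated noise (at most m = 16 in absolute value, below q/4 = 24) plus bit * (q // 2), so the two cases never overlap.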
Max ERC Funding
1 311 688 €
Duration
Start date: 2015-10-01, End date: 2020-09-30
Project acronym FICOMOL
Project Field Control of Cold Molecular Collisions
Researcher (PI) Sebastiaan Y T VAN DE MEERAKKER
Host Institution (HI) STICHTING KATHOLIEKE UNIVERSITEIT
Call Details Consolidator Grant (CoG), PE4, ERC-2018-COG
Summary It is a long-held dream of chemical physicists to study (and to control!) the interactions between individual molecules in completely specified collisions. This project brings this goal within reach. I will develop novel methods to study collisions between individual molecules at temperatures between 10 mK and 10 K, and to manipulate their interaction using electric and magnetic fields. Under these cold conditions, the collisions are dominated by quantum effects such as interference and tunneling. Scattering resonances occur that respond sensitively to external electric or magnetic fields, offering the thrilling prospect of “control knobs” to steer the outcome of a collision. Building on my unique experience with state-of-the-art molecular beam deceleration methods, I will study scattering resonances for chemically relevant systems involving molecules such as OH, NO, NH3 and H2CO in crossed beam experiments. Using external electric or magnetic fields, we will tune the positions and widths of resonances, such that collision rates can be changed by orders of magnitude. This type of “collision engineering” will be used to induce and study hitherto unexplored quantum phenomena, such as the merging of individual resonances, and resonant energy transfer in bimolecular collisions. Measurements of exotic collision phenomena under yet unexplored conditions as proposed here provide excellent tests for quantum theories of molecular interactions, and pave the way towards the engineering of novel quantum structures, or the collective properties of interacting molecular systems. The proposed research program will transform this field from merely “probing nature” with the highest possible detail to “manipulating nature” with the highest possible level of control. It will open up a new and intellectually rich research field in chemical physics and physical chemistry, and will be a major breakthrough in the emerging research field of cold molecules.
Max ERC Funding
2 000 000 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym FLEXILOG
Project Formal lexically informed logics for searching the web
Researcher (PI) Steven Schockaert
Host Institution (HI) CARDIFF UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Semantic search engines use structured knowledge to improve traditional web search, e.g. by directly answering questions from users. Current approaches to semantic search rely on the unrealistic assumption that all true facts about a given domain are explicitly stated in their knowledge base or on the web. To reach their full potential, semantic search engines need the ability to reason about known facts. However, existing logics cannot adequately deal with the imperfect nature of knowledge from the web. One problem is that relevant information tends to be distributed over several heterogeneous knowledge bases that are inconsistent with each other. Moreover, domain theories are seldom complete, which means that a form of so-called plausible reasoning is needed. Finally, as relevant logical theories do not exist for many domains, reasoning may need to rely on imperfect probabilistic theories that have been learned from the web.
To overcome these challenges, FLEXILOG will introduce a family of logics for robust reasoning with messy real-world knowledge, based on vector-space representations of natural language terms (i.e. of lexical knowledge). In particular, we will use lexical knowledge to estimate the plausibility of logical models, using conceptual simplicity as a proxy for plausibility (i.e. Occam’s razor). This will enable us to implement various forms of commonsense reasoning, equipping classical logic with the ability to draw plausible conclusions based on regularities that are observed in a knowledge base. We will then generalise our approach to probabilistic logics, and show how we can use the resulting lexically informed probabilistic logics to learn accurate and comprehensive domain theories from the web. This project will enable a robust data-driven approach to logic-based semantic search, and more generally lead to fundamental progress in a variety of knowledge-intensive applications for which logical inference has traditionally been too brittle.
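The lexical signal that such vector-space representations provide can be sketched with plain cosine similarity between term vectors; the three-dimensional vectors below are made-up illustrative values, not embeddings from any real model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two term vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-dimensional "embeddings" (hypothetical values for illustration).
vec = {
    "cat":        [0.9, 0.1, 0.0],
    "dog":        [0.8, 0.2, 0.1],
    "carburetor": [0.0, 0.1, 0.9],
}
```

A reasoner could use such scores as soft evidence: a rule observed to hold for "cat" is more plausibly extended to "dog" than to "carburetor".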
Max ERC Funding
1 451 656 €
Duration
Start date: 2015-05-01, End date: 2020-04-30
Project acronym FOC
Project Foundations of Cryptographic Hardness
Researcher (PI) Iftach Ilan Haitner
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary A fundamental research challenge in modern cryptography is understanding the necessary hardness assumptions required to build different cryptographic primitives. Attempts to answer this question have gained tremendous success in the last 20-30 years. Most notably, it was shown that many highly complicated primitives can be based on the mere existence of one-way functions (i.e., easy to compute and hard to invert), while other primitives cannot be based on such functions. This research has yielded fundamental tools and concepts such as randomness extractors and computational notions of entropy. Yet many of the most fundamental questions remain unanswered.
Our first goal is to answer the fundamental question of whether cryptography can be based on the assumption that P does not equal NP. Our second and third goals are to build more efficient symmetric-key cryptographic primitives from one-way functions, and to establish effective methods for security amplification of cryptographic primitives. Succeeding in the second and third goals is likely to have great bearing on the way that we construct the very basic cryptographic primitives. A positive answer to the first question would be considered a dramatic result in the cryptography and computational complexity communities.
To address these goals, it is very useful to understand the relationship between different types and quantities of cryptographic hardness. Such understanding typically involves defining and manipulating different types of computational entropy, and comprehending the power of security reductions. We believe that this research will yield new concepts and techniques, with ramification beyond the realm of foundational cryptography.
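The notion of a one-way function mentioned above can be made concrete with a standard candidate, modular exponentiation: computing g^x mod p is fast, while inverting it (the discrete logarithm) has no known efficient classical algorithm. This is a textbook illustration, not a construction from the proposal.

```python
def f(x, g=5, p=2**61 - 1):
    """Forward direction: fast modular exponentiation, O(log x) multiplications."""
    return pow(g, x, p)

def invert(y, g=5, p=2**61 - 1):
    """Inversion by exhaustive search: time grows linearly in x, i.e.
    exponentially in the bit length of x -- infeasible at cryptographic sizes."""
    acc, x = 1, 0
    while acc != y:
        acc = acc * g % p
        x += 1
    return x
```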
Max ERC Funding
1 239 838 €
Duration
Start date: 2015-03-01, End date: 2021-02-28
Project acronym FTHPC
Project Fault Tolerant High Performance Computing
Researcher (PI) Oded Schwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures), resulting from the steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ascertaining fault tolerance for high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort, but severely degrading performance (e.g., checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general purpose fault resilience.
Efficient general purpose solutions (e.g., via error correcting codes) revolutionized memory and communication devices over two decades ago, enabling programmers to effectively disregard the very likely memory and communication errors. The time has come for a similar paradigm shift in the computing regimen. I argue that exciting recent advances in error correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
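Checkpoint-restart, the general-purpose approach mentioned above, can be sketched in a few lines: periodically persist the computation state so that a restart after a fault loses at most one checkpoint interval. The file layout, step counts, and "work" below are arbitrary illustrative choices, not part of the proposal.

```python
import os
import pickle

def long_computation(ckpt_path, n_steps=1000, ckpt_every=100):
    """Sum 0..n_steps-1, checkpointing progress every ckpt_every steps."""
    step, total = 0, 0
    if os.path.exists(ckpt_path):               # resume after a fault
        with open(ckpt_path, "rb") as f:
            step, total = pickle.load(f)
    while step < n_steps:
        total += step                           # one unit of "work"
        step += 1
        if step % ckpt_every == 0:              # persist progress
            with open(ckpt_path, "wb") as f:    # atomic rename omitted for brevity
                pickle.dump((step, total), f)
    return total
```

The performance cost this summary refers to is visible here: every checkpoint is a full serialization of the state, paid whether or not a fault ever occurs.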
Max ERC Funding
1 824 467 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym FUN2MODEL
Project From FUnction-based TO MOdel-based automated probabilistic reasoning for DEep Learning
Researcher (PI) Marta KWIATKOWSKA
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2018-ADG
Summary Machine learning is revolutionising computer science and AI. Much of its success is due to deep neural networks, which have demonstrated outstanding performance in perception tasks such as image classification. Solutions based on deep learning are now being deployed in real-world systems, from virtual personal assistants to self-driving cars. Unfortunately, the black-box nature and instability of deep neural networks is raising concerns about the readiness of this technology. Efforts to address robustness of deep learning are emerging, but are limited to simple properties and function-based perception tasks that learn data associations. While perception is an essential feature of an artificial agent, achieving beneficial collaboration between human and artificial agents requires models of autonomy, inference, decision making, control and coordination that significantly go beyond perception. To address this challenge, this project will capitalise on recent breakthroughs by the PI and develop a model-based, probabilistic reasoning framework for autonomous agents with cognitive aspects, which supports reasoning about their decisions, agent interactions and inferences that capture cognitive information, in presence of uncertainty and partial observability. The objectives are to develop novel probabilistic verification and synthesis techniques to guarantee safety, robustness and fairness for complex decisions based on machine learning, formulate a comprehensive, compositional game-based modelling framework for reasoning about systems of autonomous agents and their interactions, and evaluate the techniques on a variety of case studies.
Addressing these challenges will require a fundamental shift towards Bayesian methods, and development of new, scalable, techniques, which differ from conventional probabilistic verification. If successful, the project will result in major advances in the quest towards provably robust and beneficial AI.
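A minimal instance of probabilistic verification is computing the probability of eventually reaching a target state in a Markov chain by fixed-point (value) iteration, the core computation in probabilistic model checkers such as PRISM; the three-state chain below is a made-up example, far simpler than the agent models the project targets.

```python
def reach_prob(P, target, iters=200):
    """Probability of eventually reaching a target state, by iterating
    x[i] <- sum_j P[i][j] * x[j], with x pinned to 1 on target states."""
    n = len(P)
    x = [1.0 if i in target else 0.0 for i in range(n)]
    for _ in range(iters):
        x = [1.0 if i in target else sum(P[i][j] * x[j] for j in range(n))
             for i in range(n)]
    return x

# States: 0 = start, 1 = success (absorbing), 2 = failure (absorbing).
P = [[0.0, 0.5, 0.5],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
```

Starting the iteration from zero on non-target states makes it converge to the least fixed point, which is exactly the reachability probability.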
Max ERC Funding
2 417 890 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym GEM
Project From Geometry to Motion: inverse modeling of complex mechanical structures
Researcher (PI) Florence Bertails-Descoubes
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary With the considerable advances in automatic image-based capture in Computer Vision and Computer Graphics in recent years, it has become affordable to acquire quickly and precisely the full 3D geometry of many mechanical objects featuring intricate shapes. Yet, while more and more geometrical data are collected and shared among the communities, there has been very little study of how to infer the underlying mechanical properties of captured objects merely from their geometrical configurations.
The GEM challenge consists in developing a non-invasive method for inferring the mechanical properties of complex objects from a minimal set of geometrical poses, in order to predict their dynamics. In contrast to classical inverse reconstruction methods, my proposal is built upon the claim that 1/ the mere geometrical shape of physical objects reveals a lot about their underlying mechanical properties and 2/ this property can be fully leveraged for a wide range of objects featuring rich geometrical configurations, such as slender structures subject to frictional contact (e.g., folded cloth or twined filaments).
To achieve this goal, we shall develop an original inverse modeling strategy based upon a/ the design of reduced and high-order discrete models for slender mechanical structures including rods, plates and shells, b/ a compact and well-posed mathematical formulation of our nonsmooth inverse problems, both in the static and dynamic cases, c/ the design of robust and efficient numerical tools for solving such complex problems, and d/ a thorough experimental validation of our methods relying on the most recent capturing tools.
In addition to significant advances in fast image-based measurement of diverse mechanical materials stemming from physics, biology, or manufacturing, this research is expected in the long run to ease considerably the design of physically realistic virtual worlds, as well as to boost the creation of dynamic human doubles.
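The flavour of inverse mechanical inference can be illustrated on the simplest possible case: a least-squares fit of a spring stiffness from several observed static equilibria. All numbers are hypothetical, and the project targets far richer models (rods, plates, shells under frictional contact) than this one-parameter toy.

```python
def infer_stiffness(masses, displacements, g=9.81):
    """Least-squares fit of k in the static equilibria k*x = m*g,
    observed over several poses: k = sum(m*g*x) / sum(x*x)."""
    num = sum(m * g * x for m, x in zip(masses, displacements))
    den = sum(x * x for x in displacements)
    return num / den
```

With noise-free observations the fit recovers the stiffness exactly; the project's setting replaces this scalar with high-dimensional, nonsmooth models where the same inverse logic is far harder.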
Max ERC Funding
1 498 570 €
Duration
Start date: 2015-09-01, End date: 2021-08-31
Project acronym GEMS
Project General Embedding Models for Spectroscopy
Researcher (PI) Chiara CAPPELLI
Host Institution (HI) SCUOLA NORMALE SUPERIORE
Call Details Consolidator Grant (CoG), PE4, ERC-2018-COG
Summary Recently, there has been a paradigmatic shift in experimental molecular spectroscopy, with new methods focusing on the study of molecules embedded within complex supramolecular/nanostructured aggregates. In the past, molecular spectroscopy has benefitted from the synergistic developments of accurate and cost-effective computational protocols for the simulation of a wide variety of spectroscopies. These methods, however, have been limited to isolated molecules or systems in solution, therefore are inadequate to describe the spectroscopy of complex nanostructured systems. The aim of GEMS is to bridge this gap, and to provide a coherent theoretical description and cost-effective computational tools for the simulation of spectra of molecules interacting with metal nano-particles, metal nanoaggregates and graphene sheets.
To this end, I will develop a novel frequency-dependent multilayer Quantum Mechanical (QM)/Molecular Mechanics (MM) embedding approach, general enough to be extendable to spectroscopic signals by using the machinery of quantum chemistry, and able to treat any kind of plasmonic external environment within the same theoretical framework, introducing its specificities through an accurate modelling and parametrization of the classical portion. The model will be interfaced with widely used computational chemistry software packages, so as to maximize its use by the scientific community, and especially by non-specialists.
As pilot applications, GEMS will study the Surface-Enhanced Raman (SERS) spectra of systems that have found applications in the biosensor field, SERS of organic molecules in subnanometre junctions, enhanced infrared (IR) spectra of oligopeptides adsorbed on graphene, Graphene-Enhanced Raman Scattering (GERS) of organic dyes, and the transmission of stereochemical response from a chiral analyte to an achiral molecule in the vicinity of a plasmon resonance of an achiral metallic nanostructure, as measured by Raman Optical Activity (ROA).
Max ERC Funding
1 609 500 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym GRACE
Project Resource Bounded Graph Query Answering
Researcher (PI) Wenfei Fan
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Advanced Grant (AdG), PE6, ERC-2014-ADG
Summary When we search for a product, can we find, using a single query, top choices ranked by Google and, at the same time, recommended by our friends connected on Facebook? Is such a query tractable on the social graph of Facebook, which has over 1.31 billion nodes and 170 billion links? Is it feasible to evaluate such a query if we have bounded resources such as time and computing facilities? These questions are challenging: they demand a departure from the traditional query evaluation paradigm and from classical computational complexity theory, and call for new resource-constrained methodologies for querying big graphs.
This project aims to tackle precisely these challenges, from fundamental problems to practical techniques, using radically new approaches. We will develop a graph pattern query language that allows us to, e.g., unify Web search (via keywords) and social search (via graph patterns), and to express graph pattern association rules for social media marketing. We will revise conventional complexity theory to characterize the tractability of queries on big data, and formalize parallel scalability as the number of processors increases. We will also develop algorithmic foundations and resource-constrained techniques for querying big graphs by "making big data small". When exact answers are beyond reach in big graphs, we will develop data-driven and query-driven approximation schemes to strike a balance between accuracy and cost. As a proof of concept, we will develop GRACE, a system to answer graph pattern queries on big GRAphs within bounded resourCEs, based on the techniques developed. We envisage that the project will deliver methodological foundations and practical techniques for querying big graphs in general, and for improving search engines and social media marketing in particular. A breakthrough in this subject will advance several fields, including databases, the theory of computation, parallel computation and social data analysis.
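The resource-bounded querying idea described in this abstract — answering as much of a graph pattern query as a fixed resource budget allows, and returning approximate matches when the budget runs out — can be illustrated with a minimal sketch. Everything here (the function, the budget model, the toy social graph) is a hypothetical illustration, not the GRACE system or its query language:

```python
# Hypothetical sketch of resource-bounded graph pattern matching: traverse
# the graph looking for paths whose node labels follow a given pattern,
# but visit at most `budget` nodes, returning whatever matches were found.

def bounded_pattern_match(graph, labels, path_labels, budget):
    """graph: {node: [neighbours]}, labels: {node: label}.
    Returns (matches, nodes_visited); matches may be incomplete if the
    budget is exhausted before the search finishes."""
    matches = []
    visited = 0
    # seed the search with every node carrying the first pattern label
    stack = [(n, [n]) for n in graph if labels[n] == path_labels[0]]
    while stack and visited < budget:
        node, path = stack.pop()
        visited += 1
        if len(path) == len(path_labels):
            matches.append(path)          # full pattern matched
            continue
        next_label = path_labels[len(path)]
        for nb in graph.get(node, []):
            if labels[nb] == next_label:
                stack.append((nb, path + [nb]))
    return matches, visited

# toy "social" graph: users connected to product pages
graph = {"alice": ["p1", "p2"], "bob": ["p2"], "p1": [], "p2": []}
labels = {"alice": "user", "bob": "user", "p1": "product", "p2": "product"}

# with a tight budget only part of the answer is found (approximate);
# a larger budget recovers all three user->product matches
matches, used = bounded_pattern_match(graph, labels, ["user", "product"], budget=3)
```

The point of the sketch is the trade-off the abstract describes: the same query run with a larger budget yields more (eventually exact) answers, so accuracy can be traded against cost by tuning a single resource parameter.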
Max ERC Funding
2 171 524 €
Duration
Start date: 2015-11-01, End date: 2020-10-31
Project acronym GraM3
Project Surface-grafted metallofullerene molecular magnets with controllable alignment of magnetic moments
Researcher (PI) Alexey Alexandrovich Popov
Host Institution (HI) LEIBNIZ-INSTITUT FUER FESTKOERPER- UND WERKSTOFFFORSCHUNG DRESDEN E.V.
Call Details Consolidator Grant (CoG), PE4, ERC-2014-CoG
Summary Molecules that retain their magnetization in the absence of a magnetic field are known as single-molecule magnets (SMMs). Important problems to be solved on the way to applications of SMMs in molecular spintronics are their deposition on surfaces and the addressing of their spins at the single-molecule level. In this project we will address these problems by designing SMMs based on endohedral metallofullerenes (EMFs) derivatized with anchoring groups. The SMM behaviour recently discovered for DySc2N@C80 and Dy2ScN@C80 in the PI's group is governed by strong magnetic anisotropy (the magnetic moments of the Dy ions are aligned along the Dy–N bonds) and ferromagnetic exchange interactions between the Dy ions within the clusters. Protected by the carbon cages, these SMMs exhibit uniquely long zero-field relaxation times of several hours at 2 K and provide an ideal system for addressing individual spin states. The spatial orientation of the magnetic moments in EMF-SMMs is determined by the endohedral cluster and is therefore influenced by the orientation of the EMF molecules and their internal dynamics. We will apply three strategies to control the spatial arrangement of the magnetic moments in EMF-SMMs: (i) deposition of EMF molecules via sublimation; (ii) exohedral modification of EMFs with anchoring groups for grafting EMFs onto surfaces; (iii) introduction of photoswitchable units into the anchoring groups, which can reversibly change their geometry on exposure to light and will allow the direction of the magnetic moment to be switched in a fully controllable way. The magnetic behaviour of the surface-grafted SMMs will be studied by bulk- and surface-sensitive techniques, including X-ray magnetic circular dichroism and especially spin-polarized scanning tunneling microscopy. Successful fulfillment of the objectives of this interdisciplinary high-risk/high-gain project will revolutionize the field of surface molecular magnetism by allowing the study and control of SMMs at the single-spin level.
Max ERC Funding
1 912 181 €
Duration
Start date: 2015-06-01, End date: 2020-05-31