Project acronym ACCORD
Project Algorithms for Complex Collective Decisions on Structured Domains
Researcher (PI) Edith Elkind
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Algorithms for Complex Collective Decisions on Structured Domains.
The aim of this proposal is to substantially advance the field of Computational Social Choice, by developing new tools and methodologies that can be used for making complex group decisions in rich and structured environments. We consider settings where each member of a decision-making body has preferences over a finite set of alternatives, and the goal is to synthesise a collective preference over these alternatives, which may take the form of a partial order over the set of alternatives with a predefined structure: examples include selecting a fixed-size set of alternatives, a ranking of the alternatives, a winner and up to two runners-up, etc. We will formulate desiderata that apply to such preference aggregation procedures, design specific procedures that satisfy as many of these desiderata as possible, and develop efficient algorithms for computing them. As the latter step may be infeasible on general preference domains, we will focus on identifying the least restrictive domains that enable efficient computation, and use real-life preference data to verify whether the associated restrictions are likely to be satisfied in realistic preference aggregation scenarios. We will also determine whether our preference aggregation procedures are computationally resistant to malicious behaviour. To lower the cognitive burden on the decision-makers, we will extend our procedures to accept partial rankings as inputs. Finally, to further contribute towards bridging the gap between theory and practice of collective decision making, we will provide open-source software implementations of our procedures, and reach out to potential users to obtain feedback on their practical applicability.
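One of the domain restrictions the summary alludes to can be made concrete: single-peaked preferences, where every voter's ranking falls off on both sides of a peak along a fixed left-to-right axis, are a classic structured domain on which many otherwise hard aggregation problems become tractable. A minimal illustrative sketch (not code from the project) of checking whether a ranking is single-peaked with respect to a given axis:

```python
def is_single_peaked(ranking, axis):
    """Check whether `ranking` (best alternative first) is single-peaked
    with respect to `axis` (a left-to-right ordering of alternatives).

    Walking down the ranking from the voter's top choice, each next
    alternative must extend the set of axis positions covered so far
    contiguously to the left or to the right.
    """
    pos = {alt: i for i, alt in enumerate(axis)}
    lo = hi = pos[ranking[0]]          # contiguous interval covered so far
    for alt in ranking[1:]:
        p = pos[alt]
        if p == lo - 1:
            lo = p                      # extend interval leftwards
        elif p == hi + 1:
            hi = p                      # extend interval rightwards
        else:
            return False                # jumps over an uncovered position
    return True

axis = ["a", "b", "c", "d"]
print(is_single_peaked(["b", "c", "a", "d"], axis))  # True: peak at b
print(is_single_peaked(["a", "d", "b", "c"], axis))  # False
```

On such a domain, for instance, the median voter's peak is a Condorcet winner and can be found in linear time.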
Max ERC Funding
1 395 933 €
Duration
Start date: 2015-07-01, End date: 2020-06-30
Project acronym ACOULOMODE
Project Advanced coupling of low order combustor simulations with thermoacoustic modelling and controller design
Researcher (PI) Aimee Morgans
Host Institution (HI) IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "Combustion is essential to the world’s energy generation and transport needs, and will remain so for the foreseeable future. Mitigating its impact on the climate and human health, by reducing its associated emissions, is thus a priority. One significant challenge for gas-turbine combustion is combustion instability, which is currently inhibiting reductions in NOx emissions (these damage human health via a deterioration in air quality). Combustion instability is caused by a two-way coupling between unsteady combustion and acoustic waves - the large pressure oscillations that result can cause substantial mechanical damage. Currently, the lack of fast, accurate modelling tools for combustion instability, and the lack of reliable ways of suppressing it are severely hindering reductions in NOx emissions.
This proposal aims to make step improvements in both fast, accurate modelling of combustion instability, and in developing reliable active control strategies for its suppression. It will achieve this by coupling low order combustor models (these are fast, simplified models for simulating combustion instability) with advances in analytical modelling, CFD simulation, reduced order modelling and control theory tools. In particular:
* important advances in accurately incorporating the effect of entropy waves (temperature variations resulting from unsteady combustion) and non-linear flame models will be made;
* new active control strategies for achieving reliable suppression of combustion instability, including from within limit cycle oscillations, will be developed;
* an open-source low order combustor modelling tool will be developed and widely disseminated, opening access to researchers worldwide and improving communications between the fields of thermoacoustics and control theory.
Thus the proposal aims to use analytical and computational methods to contribute to achieving low NOx gas-turbine combustion, without the penalty of damaging combustion instability."
Max ERC Funding
1 489 309 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym AFFINITY
Project Actuation of Ferromagnetic Fibre Networks to improve Implant Longevity
Researcher (PI) Athina Markaki
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE8, ERC-2009-StG
Summary This proposal is for an exploratory study into a radical new approach to the problem of orthopaedic implant loosening. Such loosening commonly occurs because the joint between the implant and the surrounding bone is insufficiently strong and durable. It is a serious problem both for implants cemented to the bone and for those dependent on bone in-growth into a rough/porous implant surface. In the latter case, the main problem is commonly that bone in-growth is insufficiently rapid or deep for a strong bond to be established. The idea proposed in this work is that the implant should have a highly porous surface layer, made by bonding ferromagnetic fibres together, into which bone tissue growth would occur. During the post-operative period, application of a magnetic field will cause the fibre network to deform elastically, as individual fibres tend to align with the field. This will impose strains on the bone tissue as it grows into the fibre network. Such mechanical deformation is known to be highly beneficial in promoting bone growth, provided that the associated strain lies in a certain range (~0.1%). Preliminary work, involving both model development and experimental studies on the effect of magnetic fields on fibre networks, has suggested that beneficial therapeutic effects can be induced using field strengths no greater than those already employed for diagnostic purposes. A comprehensive 5-year, highly inter-disciplinary programme is planned, encompassing processing, network architecture characterisation, magneto-mechanical response investigations, various modelling activities and systematic in vitro experimentation to establish whether magneto-mechanical Actuation of Ferromagnetic Fibre Networks shows promise as a new therapeutic approach to improve implant longevity.
Max ERC Funding
1 442 756 €
Duration
Start date: 2010-01-01, End date: 2015-11-30
Project acronym ALORS
Project Advanced Lagrangian Optimization, Receptivity and Sensitivity analysis applied to industrial situations
Researcher (PI) Matthew Pudan Juniper
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE8, ERC-2010-StG_20091028
Summary In the last ten years there has been a surge of interest in non-modal analysis applied to canonical problems in fundamental fluid mechanics. Even in simple flows, the stability behaviour predicted by non-modal analysis can be completely different from and far more accurate than that predicted by conventional eigenvalue analysis.
As well as being more accurate, the tools of non-modal analysis, such as Lagrangian optimization, are very versatile. Furthermore, the outputs, such as receptivity and sensitivity maps of a flow, provide powerful insight for engineers. They describe where a flow is most receptive to forcing or where the flow is most sensitive to modification.
The application of non-modal analysis to canonical problems has set the scene for step changes in engineering practice in fluid mechanics and thermoacoustics. The technical objectives of this proposal are to apply non-modal analysis to high Reynolds number flows, reacting flows and thermoacoustic systems, to compare theoretical predictions with experimental measurements and to embed these techniques within an industrial design tool that has already been developed by the group.
This research group's vision is that future generations of engineering CFD tools will contain modules that can perform non-modal analysis. The generalized approach proposed here, combined with challenging scientific and engineering examples that are backed up by experimental evidence, will make this possible and demonstrate it to a wider engineering community.
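The gap between eigenvalue and non-modal predictions mentioned above can be seen in a two-line numerical experiment: a non-normal system whose eigenvalues are all stable can still amplify perturbations strongly before they decay. An illustrative sketch (the matrix and numbers are invented for exposition, not taken from the project):

```python
import numpy as np

# Upper-triangular A = [[lam1, coupling], [0, lam2]] with both eigenvalues
# negative: eigenvalue analysis predicts monotone decay, yet the norm of
# the propagator exp(A t) exhibits large transient growth.
lam1, lam2, coupling = -1.0, -2.0, 50.0

def propagator(t):
    """exp(A t), written in closed form for this 2x2 triangular matrix."""
    off = coupling * (np.exp(lam1 * t) - np.exp(lam2 * t)) / (lam1 - lam2)
    return np.array([[np.exp(lam1 * t), off],
                     [0.0,              np.exp(lam2 * t)]])

# Worst-case perturbation amplification = spectral norm of the propagator.
growth = [np.linalg.norm(propagator(t), 2) for t in np.linspace(0.0, 8.0, 200)]
print(f"max transient amplification: {max(growth):.1f}")  # far above 1
print(f"long-time norm:              {growth[-1]:.3f}")   # eventual decay
```

Despite both eigenvalues being stable, perturbation energy here grows by more than an order of magnitude before decaying, which is exactly the behaviour that conventional eigenvalue analysis misses.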
Max ERC Funding
1 301 196 €
Duration
Start date: 2010-12-01, End date: 2016-06-30
Project acronym AMD
Project Algorithmic Mechanism Design: Beyond Truthful Mechanisms
Researcher (PI) Michal Feldman
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2013-StG
Summary "The first decade of Algorithmic Mechanism Design (AMD) concentrated, very successfully, on the design of truthful mechanisms for the allocation of resources among agents with private preferences.
Truthful mechanisms are ones that incentivize rational users to report their preferences truthfully.
Truthfulness, however, for all its theoretical appeal, suffers from several inherent limitations, mainly its high communication and computation complexities.
It is not surprising, therefore, that practical applications forego truthfulness and use simpler mechanisms instead.
Simplicity in itself, however, is not sufficient, as any meaningful mechanism should also have some notion of fairness; otherwise agents will stop using it over time.
In this project I plan to develop an innovative AMD theoretical framework that will go beyond truthfulness and focus instead on the natural themes of simplicity and fairness, in addition to computational tractability.
One of my primary goals will be the design of simple and fair poly-time mechanisms that perform at near optimal levels with respect to important economic objectives such as social welfare and revenue.
To this end, I will work toward providing precise definitions of simplicity and fairness and quantifying the effects of these restrictions on the performance levels that can be obtained.
A major challenge in the evaluation of non-truthful mechanisms is defining a reasonable behavior model that will enable their evaluation.
The success of this project could have a broad impact on Europe and beyond, as it would guide the design of natural mechanisms for markets of tens of billions of dollars in revenue, such as online advertising, or sales of wireless frequencies.
The timing of this project is ideal, as the AMD field is now sufficiently mature to lead to a breakthrough and at the same time young enough to be receptive to new approaches and themes."
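For readers unfamiliar with truthfulness, the canonical example is the second-price (Vickrey) auction, in which the highest bidder wins but pays the second-highest bid, so no bidder can gain by misreporting their value. A minimal sketch for illustration (not part of the proposal):

```python
def second_price_auction(bids):
    """bids: dict mapping bidder -> bid. Returns (winner, price paid).

    The winner is the highest bidder; the price is the runner-up's bid.
    Truthful bidding is a dominant strategy: the winner's payment does
    not depend on their own bid, only on whether they win.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

bids = {"alice": 10, "bob": 7, "carol": 4}
winner, price = second_price_auction(bids)
print(winner, price)  # alice wins and pays 7, bob's bid
```

Simple mechanisms actually deployed in practice (e.g. generalized second-price auctions for online advertising) depart from this truthful ideal, which is precisely the regime the proposal targets.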
Max ERC Funding
1 394 600 €
Duration
Start date: 2013-11-01, End date: 2018-10-31
Project acronym ANGLE
Project Accelerated design and discovery of novel molecular materials via global lattice energy minimisation
Researcher (PI) Graeme Matthew Day
Host Institution (HI) UNIVERSITY OF SOUTHAMPTON
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary The goal of crystal engineering is the design of functional crystalline materials in which the arrangement of basic structural building blocks imparts desired properties. The engineering of organic molecular crystals has, to date, relied largely on empirical rules governing the intermolecular association of functional groups in the solid state. However, many materials properties depend intricately on the complete crystal structure, i.e. the unit cell, space group and atomic positions, which cannot be predicted solely using such rules. Therefore, the development of computational methods for crystal structure prediction (CSP) from first principles has been a goal of computational chemistry that could significantly accelerate the design of new materials. It is only recently that the necessary advances in the modelling of intermolecular interactions and developments in algorithms for identifying all relevant crystal structures have come together to provide predictive methods that are becoming reliable and affordable on a timescale that could usefully complement an experimental research programme. The principal aim of the proposed work is to establish the use of state-of-the-art crystal structure prediction methods as a means of guiding the discovery and design of novel molecular materials.
This research proposal both continues the development of the computational methods for CSP and, by developing a computational framework for screening of potential molecules, develops the application of these methods for materials design. The areas on which we will focus are organic molecular semiconductors with high charge carrier mobilities and, building on our recently published results in Nature [1], the development of porous organic molecular materials. The project will deliver both novel materials and improvements in the reliability of computational methods that will find widespread application in materials chemistry.
[1] Nature 2011, 474, 367-371.
Max ERC Funding
1 499 906 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym ATOMICAR
Project ATOMic Insight Cavity Array Reactor
Researcher (PI) Peter Christian Kjærgaard VESBORG
Host Institution (HI) DANMARKS TEKNISKE UNIVERSITET
Call Details Starting Grant (StG), PE4, ERC-2017-STG
Summary The goal of ATOMICAR is to achieve the ultimate sensitivity limit in heterogeneous catalysis:
Quantitative measurement of chemical turnover on a single catalytic nanoparticle.
Most heterogeneous catalysis occurs on metal nanoparticles in the size range of 3 nm - 10 nm. Model studies have established that there is often a strong coupling between nanoparticle size & shape and catalytic activity. The strong structure-activity coupling renders it probable that “super-active” nanoparticles exist. However, since there is no way to measure the catalytic activity of fewer than ca. 1 million nanoparticles at a time, any super-activity will always be hidden by “ensemble smearing”, since one million nanoparticles of exactly identical size and shape cannot be made. The state-of-the-art in catalysis benchmarking is microfabricated flow reactors with mass-spectrometric detection, but the sensitivity of this approach cannot be incrementally improved by six orders of magnitude. This calls for a new measurement paradigm where the activity of a single nanoparticle can be benchmarked – the ultimate limit for catalytic measurement.
A tiny batch reactor is the solution, but there are three key problems: How to seal it; how to track catalytic turnover inside it; and how to see the nanoparticle inside it? Graphene solves all three problems: A microfabricated cavity with a thin SixNy bottom window, a single catalytic nanoparticle inside, and a graphene seal forms a gas-tight batch reactor, since graphene has zero gas permeability. Catalysis is then tracked as an internal pressure change via the stress & deflection of the graphene seal. Crucially, the electron-transparency of graphene and SixNy enables subsequent transmission electron microscope access with atomic resolution so that active nanoparticles can be studied in full detail.
ATOMICAR will re-define the experimental limits of catalyst benchmarking and lift the field of basic catalysis research into the single-nanoparticle age.
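The pressure-readout principle above can be put into numbers with a back-of-envelope ideal-gas estimate: each molecule of net gas produced in a sealed cavity of volume V raises the pressure by k_B·T/V. All rates and dimensions below are illustrative assumptions for the sketch, not figures from the project:

```python
# Ideal-gas estimate of the pressure rise in a sealed micro-cavity.
K_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # temperature, K (room temperature)
V = 1e-16                   # cavity volume, m^3 (e.g. ~10 x 10 x 1 micrometres)

turnover_rate = 1e3         # assumed net gas-producing turnovers per second
duration = 3600.0           # one hour of reaction, s

molecules = turnover_rate * duration          # net molecules produced
delta_p = molecules * K_B * T / V             # ideal gas: dp = dN * kB * T / V
print(f"pressure rise after 1 h: {delta_p:.1f} Pa")
```

Under these assumed numbers the pressure rise is of order 100 Pa, illustrating why confining a single nanoparticle's output to a femtolitre-scale volume can bring its turnover within reach of a mechanical pressure readout.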
Max ERC Funding
1 496 000 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym BACKTOBACK
Project Engineering Solutions for Back Pain: Simulation of Patient Variance
Researcher (PI) Ruth Wilcox
Host Institution (HI) UNIVERSITY OF LEEDS
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Back pain affects eight out of ten adults during their lifetime. It places a huge economic burden on society, estimated to cost as much as 1-2% of gross national product in several European countries. Treatments for back pain have lower levels of success and are not as technologically mature as those for other musculoskeletal disorders such as hip and knee replacement. This application proposes to tackle one of the major barriers to the development of better surgical treatments for back pain.
At present, new spinal devices are commonly assessed in isolation in the laboratory under standardised conditions that do not represent the variation across the patient population. Consequently many interventions have failed during clinical trials or have proved to have poor long term success rates.
Using a combination of computational and experimental models, a new testing methodology will be developed that will enable the variation between patients to be simulated for the first time. This will enable spinal implants and therapies to be more robustly evaluated across a virtual patient population prior to clinical trial. The tools developed will be used in collaboration with clinicians and basic scientists to develop and, crucially, optimise new treatments that reduce back pain whilst preserving the unique functions of the spine.
If successful, this approach could be translated to evaluate and optimise emerging minimally invasive treatments in other joints such as the hip and knee. Research in the spine could then, for the first time, lead rather than follow that undertaken in other branches of orthopaedics.
Max ERC Funding
1 498 777 €
Duration
Start date: 2012-12-01, End date: 2018-11-30
Project acronym BANDWIDTH
Project The cost of limited communication bandwidth in distributed computing
Researcher (PI) Keren CENSOR-HILLEL
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Distributed systems underlie many modern technologies, a prime example being the Internet. The ever-increasing abundance of distributed systems necessitates their design and usage to be backed by strong theoretical foundations.
A major challenge that distributed systems face is the lack of a central authority, which brings many aspects of uncertainty into the environment, in the form of unknown network topology or unpredictable dynamic behavior. A practical restriction of distributed systems, which is at the heart of this proposal, is the limited bandwidth available for communication between the network components.
A central family of distributed tasks is that of local tasks, which are informally described as tasks which are possible to solve by sending information through only a relatively small number of hops. A cornerstone example is the need to break symmetry and provide a better utilization of resources, which can be obtained by the task of producing a valid coloring of the nodes given some small number of colors. Amazingly, there are still huge gaps between the known upper and lower bounds for the complexity of many local tasks. This holds even if one allows powerful assumptions of unlimited bandwidth. While some known algorithms indeed use small messages, the complexity gaps are even larger compared to the unlimited bandwidth case. This is not a mere coincidence, and in fact the existing theoretical infrastructure is provably incapable of giving stronger lower bounds for many local tasks under limited bandwidth.
This proposal zooms in on this crucial blind spot in the current literature on the theory of distributed computing, namely, the study of local tasks under limited bandwidth. The goal of this research is to produce fast algorithms for fundamental distributed local tasks under restricted bandwidth, as well as understand their limitations by providing lower bounds.
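As a concrete illustration of the kind of local task discussed above, the sketch below simulates a classic randomized (Δ+1)-colouring scheme in the synchronous message-passing model: each uncoloured node repeatedly proposes a colour not used by its already-coloured neighbours, and commits when no competing neighbour made the same proposal. This is a standard textbook-style construction for intuition only, not one of the algorithms the proposal will develop, and all names are illustrative.

```python
import random

random.seed(0)  # reproducible toy run

def distributed_coloring(adj):
    """adj: dict node -> set of neighbours. Returns (node -> colour, #rounds)."""
    delta = max(len(nbrs) for nbrs in adj.values())  # max degree Δ
    palette = range(delta + 1)                       # Δ+1 colours always suffice
    colour = {}                                      # committed colours
    active = set(adj)
    rounds = 0
    while active:
        rounds += 1
        # Each active node proposes a colour not used by committed neighbours.
        proposal = {}
        for u in active:
            taken = {colour[v] for v in adj[u] if v in colour}
            proposal[u] = random.choice([c for c in palette if c not in taken])
        # A node commits iff no neighbour proposed the same colour this round.
        for u in list(active):
            if all(proposal.get(v) != proposal[u] for v in adj[u]):
                colour[u] = proposal[u]
                active.discard(u)
    return colour, rounds

# 5-cycle example: Δ = 2, so 3 colours suffice.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
colour, rounds = distributed_coloring(adj)
```

Note that each round only exchanges a single colour index per edge, i.e. O(log Δ)-bit messages, which is why schemes of this shape remain meaningful under limited bandwidth.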
Max ERC Funding
1 486 480 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym BCOOL
Project Barocaloric materials for energy-efficient solid-state cooling
Researcher (PI) Javier Eduardo Moya Raposo
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE8, ERC-2015-STG
Summary Cooling is essential for food and drinks, medicine, electronics and thermal comfort. Thermal changes due to pressure-driven phase transitions in fluids have long been used in vapour compression systems to achieve continuous refrigeration and air conditioning, but their energy efficiency is relatively low, and the working fluids that are employed harm the environment when released to the atmosphere. More recently, the discovery of large thermal changes due to pressure-driven phase transitions in magnetic solids has led to suggestions for environmentally friendly solid-state cooling applications. However, for this new cooling technology to succeed, it is still necessary to find suitable barocaloric (BC) materials that satisfy the demanding requirements set by applications, namely very large thermal changes in inexpensive materials that occur near room temperature in response to small applied pressures.
I aim to develop new BC materials by exploiting phase transitions in non-magnetic solids whose structural and thermal properties are strongly coupled, namely ferroelectric salts, molecular crystals and hybrid materials. These materials are normally made from cheap abundant elements, and display very large latent heats and volume changes at structural phase transitions, which make them ideal candidates to exhibit extremely large BC effects that outperform those observed in state-of-the-art BC magnetic materials, and that match applications.
My unique approach combines: i) materials science to identify materials with outstanding BC performance, ii) advanced experimental techniques to explore and exploit these novel materials, iii) materials engineering to create new composite materials with enhanced BC properties, and iv) fabrication of BC devices, using insight gained from modelling of materials and device parameters. If successful, my ambitious strategy will culminate in revolutionary solid-state cooling devices that are environmentally friendly and energy efficient.
Max ERC Funding
1 467 521 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym BeadsOnString
Project Beads on String Genomics: Experimental Toolbox for Unmasking Genetic / Epigenetic Variation in Genomic DNA and Chromatin
Researcher (PI) Yuval Ebenstein
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE4, ERC-2013-StG
Summary Next generation sequencing (NGS) is revolutionizing all fields of biological research but it fails to extract the full range of information associated with genetic material and is lacking in its ability to resolve variations between genomes. The high degree of genome variation exhibited both on the population level as well as between genetically “identical” cells (even in the same organ) makes genetic and epigenetic analysis on the single cell and single genome level a necessity.
Chromosomes may be conceptually represented as a linear one-dimensional barcode. However, in contrast to a traditional binary barcode approach that considers only two possible bits of information (1 & 0), I will use colour and molecular structure to expand the variety of information represented in the barcode. Like colourful beads threaded on a string, where each bead represents a distinct type of observable, I will label each type of genomic information with a different chemical moiety thus expanding the repertoire of information that can be simultaneously measured. A major effort in this proposal is invested in the development of unique chemistries to enable this labelling.
I specifically address three types of genomic variation: Variations in genomic layout (including DNA repeats, structural and copy number variations), variations in the patterns of chemical DNA modifications (such as methylation of cytosine bases) and variations in the chromatin composition (including nucleosome and transcription factor distributions). I will use physical extension of long DNA molecules on surfaces and in nanofluidic channels to reveal this information visually in the form of a linear, fluorescent “barcode” that is read-out by advanced imaging techniques. Similarly, DNA molecules will be threaded through a nanopore where the sequential position of “bulky” molecular groups attached to the DNA may be inferred from temporal modulation of an ionic current measured across the pore.
Max ERC Funding
1 627 600 €
Duration
Start date: 2013-10-01, End date: 2018-09-30
Project acronym BIAF
Project Bird Inspired Autonomous Flight
Researcher (PI) Shane Paul Windsor
Host Institution (HI) UNIVERSITY OF BRISTOL
Call Details Starting Grant (StG), PE8, ERC-2015-STG
Summary The agile and efficient flight of birds shows what flight performance is physically possible, and in theory could be achieved by unmanned air vehicles (UAVs) of the same size. The overall aim of this project is to enhance the performance of small-scale UAVs by developing novel technologies inspired by understanding how birds are adapted to interact with airflows. Small UAVs have the potential to dramatically change current practices in many areas such as search and rescue, surveillance, and environmental monitoring. Currently the utility of these systems is limited by their operational endurance and their inability to operate in strong turbulent winds, especially those that often occur in urban environments. Birds are adapted to be able to fly in these conditions and actually use them to their advantage to minimise their energy output.
This project is composed of three tracks which contain elements of technology development, as well as scientific investigation looking at bird flight behaviour and aerodynamics. The first track looks at developing path planning algorithms for UAVs in urban environments based on how birds fly in these areas, by using GPS tracking and computational fluid dynamics alongside trajectory optimization. The second track aims to develop artificial wings with improved gust tolerance inspired by the features of feathered wings. Here, high speed video measurements of birds flying through gusts will be used alongside wind tunnel testing of artificial wings to discover what features of a bird’s wing help to alleviate gusts. The third track develops novel force and flow sensor arrays for autonomous flight control based on the sensor arrays found in flying animals. These arrays will be used to make UAVs with increased agility and robustness. This unique bird inspired approach uses biology to show what is possible, and engineering to find the features that enable this performance and develop them into functional technologies.
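The first track's combination of bird-inspired route choice and trajectory optimisation can be caricatured, at its simplest, as shortest-path planning over an energy-cost field. The sketch below runs Dijkstra's algorithm on a toy grid whose per-cell costs stand in for wind-dependent energy expenditure; the grid, the costs, and the cost model are invented for illustration and are not the project's actual methods.

```python
import heapq

def plan(cost, start, goal):
    """cost: 2-D list of per-cell energy costs (start cell is free).
    Returns (total energy, path as list of (row, col))."""
    rows, cols = len(cost), len(cost[0])
    frontier = [(0, start, [start])]       # min-heap ordered by energy so far
    best = {start: 0}
    while frontier:
        energy, node, path = heapq.heappop(frontier)
        if node == goal:
            return energy, path
        if energy > best.get(node, float('inf')):
            continue                        # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                e = energy + cost[nr][nc]
                if e < best.get((nr, nc), float('inf')):
                    best[(nr, nc)] = e
                    heapq.heappush(frontier, (e, (nr, nc), path + [(nr, nc)]))
    return float('inf'), []

# Cheap "updraft corridor" down the middle column.
grid = [[5, 1, 5],
        [5, 1, 5],
        [5, 1, 5]]
energy, path = plan(grid, (0, 0), (2, 2))
```

On this grid the planner detours through the cheap middle column rather than flying the shorter-looking edge route, mirroring how a bird exploits favourable airflow to cut energy output.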
Max ERC Funding
1 998 546 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym BIOELE
Project Functional Biointerface Elements via Biomicrofabrication
Researcher (PI) YANYAN HUANG
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE8, ERC-2017-STG
Summary Imagine future bionic devices that merge device and biology: devices that can perform molecular sensing, simulate the functions of lab-grown organs, or even replace or improve parts of an organ as smart implants. Such bionic devices are set to transform a number of emerging fields, including synthetic biotechnology, regenerative medicine, and human-machine interfaces. Merging biology and man-made devices also means that materials of vastly different properties need to be seamlessly integrated. One of the promising strategies for manufacturing these devices is 3D printing, which can structure different materials into functional devices while simultaneously intertwining them with biological matter. However, the requirements for biocompatibility, miniaturisation, portability and high performance in bionic devices push the current limits of micro- and nanoscale 3D printing.
This proposal aims to develop a new multi-material, cross-length-scale biofabrication platform, with a specific focus on making future smart bionic devices. In particular, a new mechanism is proposed to smoothly interface diverse classes of materials, such that an active device component can be ‘shrunk’ into a single small fibre. This mechanism utilises polymeric materials’ flow properties under applied tensile forces, and their ability to combine with other classes of materials, such as semiconductors and metals, to impart further functionalities. This smart device fibre can be custom-made to perform different tasks, such as light emission or energy harvesting, to bridge 3D bioprinting for the future creation of high-performance, compact, and cell-friendly bionic and medical devices.
Max ERC Funding
1 486 938 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym BIOIONS
Project Biological ions in the gas-phase: New techniques for structural characterization of isolated biomolecular ions
Researcher (PI) Caroline Dessent
Host Institution (HI) UNIVERSITY OF YORK
Call Details Starting Grant (StG), PE4, ERC-2007-StG
Summary Recent intensive research on the laser spectroscopy of neutral gas-phase biomolecules has yielded a detailed picture of their structures and conformational preferences away from the complications of the bulk environment. In contrast, work on ionic systems has been sparse despite the fact that many important molecular groups are charged under physiological conditions. To address this problem, we have developed a custom-built laser spectrometer, which incorporates a distinctive electrospray ionisation (ESI) cluster ion source, dedicated to producing biological anions (ATP, oligonucleotides) and their microsolvated clusters for structural characterization. Many previous laser spectrometers with ESI sources have suffered from producing “hot” congested spectra as the ions were produced at ambient temperatures. This is a particularly serious limitation for spectroscopic studies of biomolecules, since these systems can possess high internal energies due to the presence of numerous low-frequency modes. Our spectrometer overcomes this problem by exploiting the newly developed physics technique of “buffer gas cooling” to produce cold ESI molecular ions. In this proposal, we now seek to exploit the new laser spectrometer to perform detailed spectroscopic interrogations of ESI-generated biomolecular anions and clusters. In addition to traditional ion-dissociation spectroscopies, we propose to develop two new laser spectroscopy techniques (two-colour tuneable IR spectroscopy and dipole-bound excited state spectroscopy) to give the broadest possible structural characterizations of the systems of interest. Studies will focus on ATP/GTP anions, oligonucleotides, and sulphated and carboxylated sugars. These methodologies will provide a general approach for performing temperature-controlled spectroscopic characterizations of isolated biological ions, with measurements on the corresponding micro-solvated clusters providing details of how the molecules are perturbed by solvent.
Max ERC Funding
1 250 000 €
Duration
Start date: 2008-10-01, End date: 2015-06-30
Project acronym BIONET
Project Network Topology Complements Genome as a Source of Biological Information
Researcher (PI) Natasa Przulj
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary Genetic sequences have had an enormous impact on our understanding of biology. The expectation is that biological network data will have a similar impact. However, progress is hindered by a lack of sophisticated graph theoretic tools that will mine these large networked datasets.
In recent breakthrough work at the boundary of computer science and biology, supported by my USA NSF CAREER award, I developed sensitive network analysis, comparison and embedding tools which demonstrated that protein-protein interaction networks of eukaryotes are best modeled by geometric graphs. They also established an unprecedented, phenotypically validated link between network topology and biological function and disease. Now I propose to substantially extend these preliminary results and design sensitive and robust network alignment methods that will lead to uncovering unknown biology and evolutionary relationships. The potential ground-breaking impact of such network alignment tools could parallel the impact of the BLAST family of sequence alignment tools, which has revolutionized our understanding of biological systems and therapeutics. Furthermore, I propose to develop additional sophisticated graph theoretic techniques to mine network data and hence complement the biological information that can be extracted from sequence. I propose to exploit these new techniques for biological applications in collaboration with experimentalists at Imperial College London: 1. aligning biological networks of species whose genomes are closely related, but that have very different phenotypes, in order to uncover systems-level factors that contribute to pronounced differences; 2. comparing and contrasting stress response pathways and metabolic pathways in bacteria in a unified systems-level framework and exploiting the findings for: (a) bioengineering of micro-organisms for industrial applications (production of bio-fuels, bioremediation, production of biopolymers); (b) biomedical applications.
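To give a flavour of topology-driven network alignment as described above, the sketch below scores node pairs across two networks by a crude topological signature (degree plus sorted neighbour degrees, a simple stand-in for richer statistics such as graphlet-based signatures) and greedily matches the most similar pairs. This is an illustrative toy, not the alignment methods the proposal will design.

```python
import itertools

def signature(adj, u):
    """Crude topology descriptor: (degree, sorted neighbour degrees)."""
    return (len(adj[u]), tuple(sorted(len(adj[v]) for v in adj[u])))

def align(adj1, adj2):
    """Greedy one-to-one alignment by signature distance (smaller = more similar)."""
    scores = []
    for u in adj1:
        for v in adj2:
            s1, s2 = signature(adj1, u), signature(adj2, v)
            dist = abs(s1[0] - s2[0]) + sum(
                abs(a - b)
                for a, b in itertools.zip_longest(s1[1], s2[1], fillvalue=0))
            scores.append((dist, u, v))
    used1, used2, mapping = set(), set(), {}
    for dist, u, v in sorted(scores):       # best-scoring pairs matched first
        if u not in used1 and v not in used2:
            mapping[u] = v
            used1.add(u)
            used2.add(v)
    return mapping

# Two isomorphic 3-node stars with different node labels.
adj1 = {'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a'}}   # centre 'a'
adj2 = {'x': {'y'}, 'y': {'x', 'z'}, 'z': {'y'}}   # centre 'y'
mapping = align(adj1, adj2)
```

Even this degree-only signature correctly pairs the two hub nodes; the research described above replaces such crude descriptors with far more sensitive topological statistics.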
Max ERC Funding
1 638 175 €
Duration
Start date: 2012-01-01, End date: 2017-12-31
Project acronym BioNet
Project Dynamical Redesign of Biomolecular Networks
Researcher (PI) Edina ROSTA
Host Institution (HI) KING'S COLLEGE LONDON
Call Details Starting Grant (StG), PE4, ERC-2017-STG
Summary Enzymes created by Nature are still more selective and can be orders of magnitude more efficient than man-made catalysts, in spite of recent advances in the design of de novo catalysts and in enzyme redesign. The optimal engineering of either small molecular or of complex biological catalysts requires both (i) accurate quantitative computational methods capable of a priori assessing catalytic efficiency, and (ii) molecular design principles and corresponding algorithms to achieve, understand and control biomolecular catalytic function and mechanisms. Presently, the computational design of biocatalysts is challenging due to the need for accurate yet computationally-intensive quantum mechanical calculations of bond formation and cleavage, as well as to the requirement for proper statistical sampling over very many degrees of freedom. Pioneering enhanced sampling and analysis methods have been developed to address crucial challenges bridging the gap between the available simulation length and the biologically relevant timescales. However, biased simulations do not generally permit the direct calculation of kinetic information. Recently, I and others pioneered simulation tools that can enable not only accurate calculations of free energies, but also of the intrinsic molecular kinetics and the underlying reaction mechanisms as well. I propose to develop more robust, automatic, and system-tailored sampling algorithms that are optimal in each case. I will use our kinetics-based methods to develop a novel theoretical framework to address catalytic efficiency and to establish molecular design principles to key design problems for new bio-inspired nanocatalysts, and to identify and characterize small molecule modulators of enzyme activity. This is a highly interdisciplinary project that will enable fundamental advances in molecular simulations and will unveil the physical principles that will lead to design and control of catalysis with Nature-like efficiency.
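To illustrate why sampling is the bottleneck described above, the sketch below runs plain Metropolis Monte Carlo on a one-dimensional double-well potential: transitions over even this tiny barrier are rare events, which is precisely the gap that enhanced-sampling and kinetics-based methods address. The potential, parameters, and estimator are invented for illustration and are not the proposal's own methods.

```python
import math
import random

def metropolis(potential, x0=-1.0, beta=3.0, step=0.3, n=200_000, seed=1):
    """Plain Metropolis sampler; returns the trajectory of positions."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        d_e = potential(xp) - potential(x)
        # Accept with probability min(1, exp(-beta * dE)).
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            x = xp
        samples.append(x)
    return samples

double_well = lambda x: (x * x - 1.0) ** 2   # minima at x = ±1, barrier at x = 0

samples = metropolis(double_well)
# Relative well populations give the free-energy difference:
# F_right - F_left = -(1/beta) * ln(p_right / p_left), ~0 here by symmetry.
p_right = sum(s > 0 for s in samples) / len(samples)
```

The sampler spends long stretches trapped in one well between rare barrier crossings; for a biomolecule with many such barriers, brute-force sampling of this kind becomes hopeless, motivating the enhanced-sampling and kinetics-based machinery above.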
Max ERC Funding
1 499 999 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym BRiCPT
Project Basic Research in Cryptographic Protocol Theory
Researcher (PI) Jesper Buus Nielsen
Host Institution (HI) AARHUS UNIVERSITET
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary In cryptographic protocol theory, we consider a situation where a number of entities want to solve some problem over a computer network. Each entity has some secret data it does not want the other entities to learn, yet they all want to learn something about the common set of data. For instance, in an electronic election, they want to know the number of yes-votes without revealing who voted what; in an electronic auction, they want to find the winner without leaking the bids of the losers.
A main focus of the project is to develop new techniques for solving such protocol problems. We are in particular interested in techniques which can automatically construct a protocol solving a problem given only a description of what the problem is. My focus will be theoretical basic research, but I believe that advancing the theory of secure protocol compilers will have an immense impact on the practice of developing secure protocols.
When one develops complex protocols, it is important to be able to verify their correctness before they are deployed, in particular when the purpose of the protocols is to protect information: if an error is only found and corrected after deployment, the sensitive data may already be compromised. Therefore, cryptographic protocol theory develops models of what it means for a protocol to be secure, and techniques for analyzing whether a given protocol is secure or not.
A main focus of the project is to develop better security models, as existing security models either suffer from the problem that it is possible to prove some protocols secure which are not secure in practice, or from the problem that it is impossible to prove security of some protocols which are believed to be secure in practice. My focus will again be on theoretical basic research, but I believe that better security models are important for establishing a practice where protocols are verified as secure before they are deployed.
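The election example above can be made concrete with a toy additive secret-sharing scheme: each voter splits a 0/1 vote into random shares that sum to the vote, so the parties can jointly compute the tally while no single party learns any individual vote. This is only an illustrative sketch (modulus and party count are arbitrary), not the general protocol compilers the project targets.

```python
import secrets

P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

def share(vote, n_parties):
    """Split a 0/1 vote into n additive shares that sum to the vote mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((vote - sum(shares)) % P)
    return shares

# Three voters cast secret yes/no votes.
votes = [1, 0, 1]
all_shares = [share(v, 3) for v in votes]

# Party i receives one share of every vote and sums its column locally;
# each column on its own is uniformly random and reveals nothing.
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# Combining the partial sums reveals only the tally, not who voted what.
tally = sum(partial_sums) % P
```

The scheme is secure against passive adversaries only; handling cheating parties and richer functionalities is exactly where the models and compilers discussed in the summary come in.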
Max ERC Funding
1 171 019 €
Duration
Start date: 2011-12-01, End date: 2016-11-30
Project acronym BroadSem
Project Induction of Broad-Coverage Semantic Parsers
Researcher (PI) Ivan Titov
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary In the last one or two decades, language technology has achieved a number of important successes, for example, producing functional machine translation systems and beating humans in quiz games. The key bottleneck which prevents further progress in these and many other natural language processing (NLP) applications (e.g., text summarization, information retrieval, opinion mining, dialog and tutoring systems) is the lack of accurate methods for producing meaning representations of texts. Accurately predicting such meaning representations on an open domain with an automatic parser is a challenging and unsolved problem, primarily because of language variability and ambiguity. The reason for the unsatisfactory performance is reliance on supervised learning (learning from annotated resources), with the amounts of annotation required for accurate open-domain parsing exceeding what is practically feasible. Moreover, representations defined in these resources typically do not provide abstractions suitable for reasoning.
In this project, we will induce semantic representations from large amounts of unannotated data (i.e. text which has not been labeled by humans) while guided by information contained in human-annotated data and other forms of linguistic knowledge. This will allow us to scale our approach to many domains and across languages. We will specialize meaning representations for reasoning by modeling relations (e.g., facts) appearing across sentences in texts (document-level modeling), across different texts, and across texts and knowledge bases. Learning to predict this linked data is closely related to learning to reason, including learning the notions of semantic equivalence and entailment. We will jointly induce semantic parsers (e.g., log-linear feature-rich models) and reasoning models (latent factor models) relying on this data, thus, ensuring that the semantic representations are informative for applications requiring reasoning.
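The log-linear feature-rich parsers mentioned above score candidate meaning representations by a weighted sum of features, normalized with a softmax over the candidates. The sketch below is purely illustrative: the feature names, weights, and candidates are invented for the example and are not taken from the project.

```python
import math

# Hypothetical weights over hand-named features of a (sentence, meaning) pair.
weights = {"verb_matches_predicate": 2.0,
           "subject_is_agent": 1.5,
           "unresolved_argument": -3.0}

def score(candidate):
    """Log-linear score: dot product of weights with the candidate's features."""
    return sum(weights.get(f, 0.0) for f in candidate["features"])

# Toy candidates for parsing "John loves Mary"; in a real parser the
# features would be extracted automatically from the sentence/graph pair.
candidates = [
    {"mr": "love(john, mary)", "features": ["verb_matches_predicate",
                                            "subject_is_agent"]},
    {"mr": "love(mary, john)", "features": ["verb_matches_predicate"]},
    {"mr": "love(john, ?)",    "features": ["verb_matches_predicate",
                                            "unresolved_argument"]},
]

# Softmax over candidate scores gives a distribution over meanings.
z = sum(math.exp(score(c)) for c in candidates)
probs = {c["mr"]: math.exp(score(c)) / z for c in candidates}
best = max(probs, key=probs.get)
```

Training such a model means learning the weights from data; the project's point is to do so with largely unannotated text, jointly with a reasoning model.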
Max ERC Funding
1 457 185 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym BUNGEE-TOOLS
Project Building Next-Generation Computational Tools for High Resolution Neuroimaging Studies
Researcher (PI) Juan Eugenio Iglesias
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Recent advances in magnetic resonance (MR) acquisition technology are providing us with images of the human brain of increasing detail and resolution. While these images hold promise to greatly increase our understanding of such a complex organ, the neuroimaging community relies on tools (e.g. SPM, FSL, FreeSurfer) which, being over a decade old, were designed to work at much lower resolutions. These tools do not consider brain substructures that are visible in present-day scans, and this inability to capitalize on the vast improvement of MR is hampering progress in the neuroimaging field.
In this ambitious project, which lies at the nexus of medical histology, neuroscience, biomedical imaging, computer vision and statistics, we propose to build a set of next-generation computational tools that will enable neuroimaging studies to take full advantage of the increased resolution of modern MR technology. The core of the tools will be an ultra-high resolution probabilistic atlas of the human brain, built upon multimodal data combining histology and ex vivo MR. The resulting atlas will be used to analyze in vivo brain MR scans, which will require the development of Bayesian segmentation methods beyond the state of the art.
The developed tools, which will be made freely available to the scientific community, will enable the analysis of MR data at a superior level of structural detail, opening completely new opportunities of research in neuroscience. Therefore, we expect the tools to have a tremendous impact on the quest to understand the human brain (in health and in disease), and ultimately on public health and the economy.
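The Bayesian segmentation idea above can be sketched at its simplest: at each voxel, combine a probabilistic-atlas prior over tissue labels with a Gaussian intensity likelihood and pick the maximum-posterior label. All class names, intensities, and prior values below are invented for illustration; the project's methods go well beyond this voxel-independent model.

```python
import numpy as np

# Hypothetical tissue classes with assumed intensity statistics.
labels = ["white_matter", "grey_matter", "csf"]
means  = np.array([110.0, 80.0, 30.0])   # mean intensity per class
stds   = np.array([10.0, 10.0, 12.0])    # intensity spread per class

def posterior(intensity, atlas_prior):
    """P(label | intensity) ∝ P(intensity | label) * P(label) at one voxel."""
    likelihood = np.exp(-0.5 * ((intensity - means) / stds) ** 2) / stds
    unnorm = likelihood * atlas_prior
    return unnorm / unnorm.sum()

# A voxel of intensity 85 where the atlas favours grey matter.
p = posterior(85.0, atlas_prior=np.array([0.3, 0.6, 0.1]))
label = labels[int(np.argmax(p))]
```

In practice the atlas must first be deformably registered to the scan, and bias fields and partial-volume effects modelled; those are among the beyond-state-of-the-art pieces the project proposes.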
Max ERC Funding
1 450 075 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym CAC
Project Cryptography and Complexity
Researcher (PI) Yuval Ishai
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary Modern cryptography has deeply rooted connections with computational complexity theory and other areas of computer science. This proposal suggests exploring several new connections between questions in cryptography and questions from other domains, including computational complexity, coding theory, and even the natural sciences. The project is expected to broaden the impact of ideas from cryptography on other domains, and conversely to benefit cryptography by applying tools from other domains towards better solutions for central problems in cryptography.
Max ERC Funding
1 459 703 €
Duration
Start date: 2010-12-01, End date: 2015-11-30