Project acronym 3D-E
Project 3D Engineered Environments for Regenerative Medicine
Researcher (PI) Ruth Elizabeth Cameron
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary "This proposal develops a unified, underpinning technology to create novel, complex and biomimetic 3D environments for the control of tissue growth. As director of Cambridge Centre for Medical Materials, I have recently been approached by medical colleagues to help to solve important problems in the separate therapeutic areas of breast cancer, cardiac disease and blood disorders. In each case, the solution lies in complex 3D engineered environments for cell culture. These colleagues make it clear that existing 3D scaffolds fail to provide the required complex orientational and spatial anisotropy, and are limited in their ability to impart appropriate biochemical and mechanical cues.
I have a strong track record in this area. A particular success has been the use of a freeze-drying technology to make collagen-based porous implants for the cartilage-bone interface in the knee, which has now been commercialised. The novelty of this proposal lies in broadening the established scientific base of this technology to enable biomacromolecular structures with:
(A) controlled and complex pore orientation to mimic many normal multi-oriented tissue structures
(B) compositional and positional control to match varying local biochemical environments,
(C) the attachment of novel peptides designed to control cell behaviour, and
(D) mechanical control at both a local and macroscopic level to provide mechanical cues for cells.
These will be complemented by the development of
(E) robust characterisation methodologies for the structures created.
These advances will then be employed in each of the medical areas above.
This approach is highly interdisciplinary. Existing working relationships with experts in each medical field will guarantee expertise and licensed facilities in the required biological disciplines. Funds for this proposal would therefore establish a rich hub of mutually beneficial research and opportunities for cross-disciplinary sharing of expertise.
Max ERC Funding
2 486 267 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym ACOULOMODE
Project Advanced coupling of low order combustor simulations with thermoacoustic modelling and controller design
Researcher (PI) Aimee Morgans
Host Institution (HI) IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "Combustion is essential to the world’s energy generation and transport needs, and will remain so for the foreseeable future. Mitigating its impact on the climate and human health, by reducing its associated emissions, is thus a priority. One significant challenge for gas-turbine combustion is combustion instability, which is currently inhibiting reductions in NOx emissions (these damage human health via a deterioration in air quality). Combustion instability is caused by a two-way coupling between unsteady combustion and acoustic waves - the large pressure oscillations that result can cause substantial mechanical damage. Currently, the lack of fast, accurate modelling tools for combustion instability, and the lack of reliable ways of suppressing it are severely hindering reductions in NOx emissions.
This proposal aims to make step improvements in both fast, accurate modelling of combustion instability, and in developing reliable active control strategies for its suppression. It will achieve this by coupling low order combustor models (these are fast, simplified models for simulating combustion instability) with advances in analytical modelling, CFD simulation, reduced order modelling and control theory tools. In particular:
* important advances in accurately incorporating the effect of entropy waves (temperature variations resulting from unsteady combustion) and non-linear flame models will be made;
* new active control strategies for achieving reliable suppression of combustion instability, including from within limit cycle oscillations, will be developed;
* an open-source low order combustor modelling tool will be developed and widely disseminated, opening access to researchers worldwide and improving communications between the fields of thermoacoustics and control theory.
Thus the proposal aims to use analytical and computational methods to contribute to achieving low NOx gas-turbine combustion, without the penalty of damaging combustion instability.
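As a point of reference for readers unfamiliar with low order thermoacoustic models, the sketch below simulates a single acoustic mode as a damped oscillator driven by heat release that responds to the acoustic velocity with a time lag (an n-tau style feedback). It is not taken from the proposal; all parameter values are illustrative, and it only shows how a near-in-phase lag destabilises the mode while an out-of-phase lag damps it.

```python
import math

def simulate(beta, tau, omega=2 * math.pi * 150.0, zeta=0.005,
             dt=1e-5, t_end=0.5):
    """Single acoustic mode with time-lagged velocity feedback:
    x'' + 2*zeta*omega*x' + omega**2 * x = beta * x'(t - tau).
    Returns the late-time oscillation amplitude (illustrative units)."""
    n_delay = max(1, int(round(tau / dt)))
    x, v = 1e-3, 0.0                      # small initial acoustic disturbance
    v_hist = [0.0] * n_delay              # circular buffer of past velocities
    n_steps = int(t_end / dt)
    peak = 0.0
    for i in range(n_steps):
        v_delayed = v_hist[i % n_delay]   # velocity tau seconds ago
        a = -2.0 * zeta * omega * v - omega ** 2 * x + beta * v_delayed
        v += dt * a                       # semi-implicit Euler keeps the bare mode neutral
        x += dt * v
        v_hist[i % n_delay] = v
        if i > 0.8 * n_steps:             # record amplitude near the end of the run
            peak = max(peak, abs(x))
    return peak

period = 1.0 / 150.0
for tau in (0.1 * period, 0.5 * period):  # near-in-phase vs out-of-phase heat release
    print(f"tau = {tau * 1e3:.3f} ms -> late-time amplitude {simulate(20.0, tau):.2e}")
```

With these illustrative numbers the short lag lets the feedback act as negative damping and the amplitude grows, while the half-period lag adds damping and the disturbance dies out, which is the basic mechanism the low order models above are built to capture.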
Max ERC Funding
1 489 309 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym ActiveWindFarms
Project Active Wind Farms: Optimization and Control of Atmospheric Energy Extraction in Gigawatt Wind Farms
Researcher (PI) Johan Meyers
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary With the recognition that wind energy will become an important contributor to the world’s energy portfolio, several wind farms with a capacity of over 1 gigawatt are in the planning phase. In the past, engineering of wind farms focused on a bottom-up approach, in which atmospheric wind availability was considered to be fixed by climate and weather. However, farms of gigawatt size slow down the Atmospheric Boundary Layer (ABL) as a whole, reducing the availability of wind at turbine hub height. In Denmark’s large offshore farms, this leads to underperformance of turbines that can reach levels of 40%–50% compared to the same turbine in a lone-standing case. For large wind farms, the vertical structure and turbulence physics of the flow in the ABL become crucial ingredients in their design and operation. This introduces a new set of scientific challenges related to the design and control of large wind farms. The major ambition of the present research proposal is to employ optimal control techniques to control the interaction between large wind farms and the ABL, and to optimize overall farm-power extraction. Individual turbines are used as flow actuators by dynamically pitching their blades on time scales ranging from 10 to 500 seconds. The application of such control efforts to the atmospheric boundary layer has never been attempted before, and introduces flow control on a physical scale which is currently unprecedented. The PI possesses a unique combination of expertise and tools enabling these developments: efficient parallel large-eddy simulations of wind farms, multi-scale turbine modeling, and gradient-based optimization in large optimization-parameter spaces using adjoint formulations. To ensure a maximum impact on the wind-engineering field, the project aims at optimal control, experimental wind-tunnel validation, and at including multi-disciplinary aspects related to structural mechanics, power quality, and controller design.
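For illustration only, the toy below applies the gradient-based optimization ingredient to the simplest possible surrogate, classical actuator-disc theory for a single turbine, rather than the adjoint-based large-eddy simulations the proposal describes: gradient ascent on the power coefficient Cp(a) = 4a(1-a)^2 recovers the Betz optimum a = 1/3, Cp = 16/27. The step size and iteration count are arbitrary choices.

```python
def cp(a):
    """Power coefficient of an ideal actuator disc with axial induction factor a."""
    return 4.0 * a * (1.0 - a) ** 2

def dcp_da(a):
    """Analytical gradient dCp/da = 4(1 - a)(1 - 3a)."""
    return 4.0 * (1.0 - a) * (1.0 - 3.0 * a)

a, step = 0.05, 0.05          # initial induction factor and (arbitrary) step size
for _ in range(200):          # plain gradient ascent
    a += step * dcp_da(a)

print(f"optimal induction factor a ~ {a:.4f} (Betz: 1/3 = {1/3:.4f})")
print(f"power coefficient Cp(a) ~ {cp(a):.4f} (Betz limit: 16/27 = {16/27:.4f})")
```

The project's actual optimization problem replaces this one-line analytical gradient with adjoint-computed gradients of farm power with respect to thousands of time-dependent blade-pitch parameters, but the ascent logic is the same in spirit.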
Max ERC Funding
1 499 241 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym ACTIVIA
Project Visual Recognition of Function and Intention
Researcher (PI) Ivan Laptev
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Computer vision is concerned with the automated interpretation of images and video streams. Today's research is (mostly) aimed at answering queries such as ""Is this a picture of a dog?"", (classification) or sometimes ""Find the dog in this photo"" (detection). While categorisation and detection are useful for many tasks, inferring correct class labels is not the final answer to visual recognition. The categories and locations of objects do not provide direct understanding of their function i.e., how things work, what they can be used for, or how they can act and react. Such an understanding, however, would be highly desirable to answer currently unsolvable queries such as ""Am I in danger?"" or ""What can happen in this scene?"". Solving such queries is the aim of this proposal.
My goal is to uncover the functional properties of objects and the purpose of actions by addressing visual recognition from a different and yet unexplored perspective. The main novelty of this proposal is to leverage observations of people, i.e., their actions and interactions to automatically learn the use, the purpose and the function of objects and scenes from visual data. The project is timely as it builds upon the two key recent technological advances: (a) the immense progress in visual recognition of objects, scenes and human actions achieved in the last ten years, as well as (b) the emergence of a massive amount of public image and video data now available to train visual models.
ACTIVIA addresses fundamental research issues in automated interpretation of dynamic visual scenes, but its results are expected to serve as a basis for ground-breaking technological advances in practical applications. The recognition of functional properties and intentions as explored in this project will directly support high-impact applications such as detection of abnormal events, which are likely to revolutionise today's approaches to crime protection, hazard prevention, elderly care, and many others.
Max ERC Funding
1 497 420 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym ADAPT
Project Theory and Algorithms for Adaptive Particle Simulation
Researcher (PI) Stephane Redon
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "During the twentieth century, the development of macroscopic engineering has been largely stimulated by progress in digital prototyping: cars, planes, boats, etc. are nowadays designed and tested on computers. Digital prototypes have progressively replaced actual ones, and effective computer-aided engineering tools have helped cut costs and reduce production cycles of these macroscopic systems.
The twenty-first century is most likely to see a similar development at the atomic scale. Indeed, recent years have seen tremendous progress in nanotechnology, in particular in the ability to control matter at the atomic scale. Similar to what has happened with macroscopic engineering, powerful and generic computational tools will be needed to engineer complex nanosystems, through modeling and simulation. As a result, a major challenge is to develop efficient simulation methods and algorithms.
NANO-D, the INRIA research group I started in January 2008 in Grenoble, France, aims at developing efficient computational methods for modeling and simulating complex nanosystems, both natural and artificial. In particular, NANO-D develops SAMSON, a software application which gathers all algorithms designed by the group and its collaborators (SAMSON: Software for Adaptive Modeling and Simulation Of Nanosystems).
In this project, I propose to develop a unified theory, and associated algorithms, for adaptive particle simulation. The proposed theory will avoid problems that plague current popular multi-scale or hybrid simulation approaches by simulating a single potential throughout the system, while allowing users to finely trade precision for computational speed.
I believe the full development of the adaptive particle simulation theory will have an important impact on current modeling and simulation practices, and will enable practical design of complex nanosystems on desktop computers, which should significantly boost the emergence of generic nano-engineering.
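The following sketch is not the proposed adaptive theory or any SAMSON algorithm; it only illustrates, on a toy 2D Lennard-Jones system with made-up parameters, the general idea of trading precision for speed by updating only the particles whose current net force exceeds a threshold and temporarily freezing the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
side = 8
grid = np.stack(np.meshgrid(np.arange(side), np.arange(side)), axis=-1).reshape(-1, 2)
pos = grid * 1.2 + rng.normal(0.0, 0.03, size=grid.shape)   # jittered lattice, spacing 1.2 sigma
vel = np.zeros_like(pos)
dt, eps, sigma, f_thresh = 1e-3, 1.0, 1.0, 1.0              # illustrative values

def lj_forces(pos):
    """Pairwise Lennard-Jones forces, O(n^2) without cutoff for simplicity."""
    d = pos[:, None, :] - pos[None, :, :]             # displacement vectors
    r2 = np.sum(d * d, axis=-1) + np.eye(len(pos))    # +I avoids division by zero on the diagonal
    inv6 = (sigma ** 2 / r2) ** 3
    mag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2  # force magnitude / r
    return np.sum(mag[:, :, None] * d, axis=1)        # diagonal terms vanish since d = 0 there

for step in range(401):
    forces = lj_forces(pos)
    active = np.linalg.norm(forces, axis=1) > f_thresh   # adaptivity: freeze "quiet" particles
    vel[active] += dt * forces[active]
    pos[active] += dt * vel[active]
    if step % 100 == 0:
        print(f"step {step:3d}: updating {active.sum():2d} of {len(pos)} particles")
```

A crude on/off freeze like this does not conserve energy or simulate a single consistent potential; the point of the proposed theory is precisely to make such precision/speed trade-offs in a mathematically controlled way.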
Max ERC Funding
1 476 882 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym ALGAME
Project Algorithms, Games, Mechanisms, and the Price of Anarchy
Researcher (PI) Elias Koutsoupias
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary The objective of this proposal is to bring together a local team of young researchers who will work closely with international collaborators to advance the state of the art of Algorithmic Game Theory and open new avenues of research at the interface of Computer Science, Game Theory, and Economics. The proposal consists mainly of three intertwined research strands: algorithmic mechanism design, price of anarchy, and online algorithms.
Specifically, we will attempt to resolve some outstanding open problems in algorithmic mechanism design: characterizing the incentive-compatible mechanisms for important domains, such as the domain of combinatorial auctions, and resolving the approximation ratio of mechanisms for scheduling unrelated machines. More generally, we will study centralized and distributed algorithms whose inputs are controlled by selfish agents that are interested in the outcome of the computation. We will investigate new notions of mechanisms with strong truthfulness and limited susceptibility to externalities that can facilitate modular design of mechanisms for complex domains.
We will expand the current research on the price of anarchy to time-dependent games where the players can select not only how to act but also when to act. We also plan to resolve outstanding questions on the price of stability and to build a robust approach to these questions, similar to smoothed analysis. For repeated games, we will investigate convergence of simple strategies (e.g., fictitious play), online fairness, and strategic considerations (e.g., metagames). More generally, our aim is to find a productive formulation of playing unknown games by drawing on the fields of online algorithms and machine learning.
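For readers new to the price of anarchy, the snippet below computes it for Pigou's classic two-link routing example (a textbook instance, not one of the open problems targeted above): one unit of traffic chooses between a link of constant latency 1 and a link whose latency equals its own load; selfish routing sends everything to the variable link, while the social optimum splits the traffic, giving a ratio of 4/3.

```python
def total_cost(x):
    """Total latency when a fraction x of one unit of traffic uses the
    variable-latency link (latency = x) and 1 - x uses the fixed link (latency = 1)."""
    return x * x + (1.0 - x) * 1.0

# Nash equilibrium: each user prefers the variable link as long as its latency
# x <= 1, so all traffic ends up there (x = 1).
nash_cost = total_cost(1.0)

# Social optimum: minimise x^2 + (1 - x) over x in [0, 1] by a fine grid search.
grid = [i / 10000 for i in range(10001)]
opt_x = min(grid, key=total_cost)
opt_cost = total_cost(opt_x)

print(f"equilibrium cost   = {nash_cost:.4f}")
print(f"optimal split x*   ~ {opt_x:.3f}, optimal cost ~ {opt_cost:.4f}")
print(f"price of anarchy   ~ {nash_cost / opt_cost:.4f}  (exactly 4/3 here)")
```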
Max ERC Funding
2 461 000 €
Duration
Start date: 2013-04-01, End date: 2019-03-31
Project acronym ALLEGRO
Project Active large-scale learning for visual recognition
Researcher (PI) Cordelia Schmid
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary A massive and ever growing amount of digital image and video content is available today, on sites such as Flickr and YouTube, in audiovisual archives such as those of BBC and INA, and in personal collections. In most cases, it comes with additional information, such as text, audio or other metadata, that forms a rather sparse and noisy, yet rich and diverse source of annotation, ideally suited to emerging weakly supervised and active machine learning technology. The ALLEGRO project will take visual recognition to the next level by using this largely untapped source of data to automatically learn visual models. The main research objective of our project is the development of new algorithms and computer software capable of autonomously exploring evolving data collections, selecting the relevant information, and determining the visual models most appropriate for different object, scene, and activity categories. An emphasis will be put on learning visual models from video, a particularly rich source of information, and on the representation of human activities, one of today's most challenging problems in computer vision. Although this project addresses fundamental research issues, it is expected to result in significant advances in high-impact applications that range from visual mining of the Web and automated annotation and organization of family photo and video albums to large-scale information retrieval in television archives.
Max ERC Funding
2 493 322 €
Duration
Start date: 2013-04-01, End date: 2019-03-31
Project acronym BACKTOBACK
Project Engineering Solutions for Back Pain: Simulation of Patient Variance
Researcher (PI) Ruth Wilcox
Host Institution (HI) UNIVERSITY OF LEEDS
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Back pain affects eight out of ten adults during their lifetime. It is a huge economic burden on society, estimated to cost as much as 1-2% of gross national product in several European countries. Treatments for back pain have lower levels of success and are not as technologically mature as those for other musculoskeletal disorders such as hip and knee replacement. This application proposes to tackle one of the major barriers to the development of better surgical treatments for back pain.
At present, new spinal devices are commonly assessed in isolation in the laboratory under standardised conditions that do not represent the variation across the patient population. Consequently many interventions have failed during clinical trials or have proved to have poor long term success rates.
Using a combination of computational and experimental models, a new testing methodology will be developed that will enable the variation between patients to be simulated for the first time. This will enable spinal implants and therapies to be more robustly evaluated across a virtual patient population prior to clinical trial. The tools developed will be used in collaboration with clinicians and basic scientists to develop and, crucially, optimise new treatments that reduce back pain whilst preserving the unique functions of the spine.
If successful, this approach could be translated to evaluate and optimise emerging minimally invasive treatments in other joints such as the hip and knee. Research in the spine could then, for the first time, lead rather than follow that undertaken in other branches of orthopaedics.
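As a deliberately crude illustration of simulating patient variance, the sketch below replaces the project's computational and experimental spine models with a one-degree-of-freedom linear surrogate, samples stiffness and load from assumed (purely illustrative, non-clinical) distributions, and summarises the resulting displacement spread across a virtual population.

```python
import random
import statistics

random.seed(42)

def segment_displacement(stiffness_n_per_mm, load_n):
    """Toy linear surrogate for a spinal segment: displacement = load / stiffness."""
    return load_n / stiffness_n_per_mm

# Assumed, illustrative population distributions (not clinical data).
samples = []
for _ in range(10000):
    stiffness = random.gauss(1500.0, 300.0)   # N/mm, varies between patients
    stiffness = max(stiffness, 500.0)         # clip implausible draws
    load = random.gauss(500.0, 100.0)         # N, varies with activity and body mass
    samples.append(segment_displacement(stiffness, load))

samples.sort()
print(f"median displacement : {statistics.median(samples):.3f} mm")
print(f"5th-95th percentile : {samples[500]:.3f} - {samples[9500]:.3f} mm")
```

The proposed methodology would propagate this kind of population-level variability through full computational and experimental spine models rather than a single algebraic formula, but the statistical logic of sampling a virtual patient population is the same.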
Max ERC Funding
1 498 777 €
Duration
Start date: 2012-12-01, End date: 2018-11-30
Project acronym BeyondWorstCase
Project Algorithms beyond the Worst Case
Researcher (PI) Heiko Roglin
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary For many optimization problems that arise in logistics, information retrieval, and other contexts the classical theory of algorithms has lost its grip on reality because it is based on a pessimistic worst-case perspective, in which the performance of an algorithm is solely measured by its behavior on the worst possible input. This does not take into consideration that worst-case inputs are often rather contrived and occur only rarely in practical applications. It led to the situation that for many problems the classical theory is not able to differentiate meaningfully between different algorithms. Even worse, for some important problems it recommends algorithms that perform badly in practice over algorithms that work well in practice only because the artificial worst-case performance of the latter ones is bad.
We will study classic optimization problems (traveling salesperson problem, linear programming, etc.) as well as problems coming from machine learning and information retrieval. All these problems have in common that the practically most successful algorithms have a devastating worst-case performance even though they clearly outperform the theoretically best algorithms.
Only in recent years has a paradigm shift towards a more realistic and robust algorithmic theory been initiated. This project will play a major role in this paradigm shift by developing and exploring novel theoretical approaches (e.g. smoothed analysis) to reconcile theory and practice. A more realistic theory will have a profound impact on the design and analysis of algorithms in the future, and the insights gained in this project will lead to algorithmic tools for large-scale optimization problems that improve on existing ad hoc methods. We will not only work theoretically but also test the applicability of our theoretical considerations in experimental studies.
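A small empirical illustration of the smoothed-analysis viewpoint, on a toy algorithm rather than one of the problems listed above: deterministic quicksort with a first-element pivot needs a quadratic number of comparisons on its worst-case (already sorted) input, but after a mild random perturbation of the input, a partial permutation in the sense of that literature, the comparison count drops sharply. The perturbation probability and input size are arbitrary.

```python
import random

def quicksort_comparisons(a):
    """Count comparisons made by quicksort with a first-element pivot (iterative)."""
    a = list(a)
    comparisons = 0
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[lo]
        i = lo + 1
        for j in range(lo + 1, hi + 1):           # Lomuto-style partition
            comparisons += 1
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]
        stack.append((lo, i - 2))
        stack.append((i, hi))
    return comparisons

def partial_permutation(a, p, rng):
    """Smoothing model: each element is marked with probability p and the
    marked elements are randomly permuted among their positions."""
    a = list(a)
    marked = [i for i in range(len(a)) if rng.random() < p]
    values = [a[i] for i in marked]
    rng.shuffle(values)
    for i, v in zip(marked, values):
        a[i] = v
    return a

rng = random.Random(1)
worst = list(range(2000))                         # sorted input: worst case for this pivot rule
print("worst-case comparisons:", quicksort_comparisons(worst))
smoothed = [quicksort_comparisons(partial_permutation(worst, 0.1, rng)) for _ in range(5)]
print("after 10% partial permutation:", smoothed)
```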
Max ERC Funding
1 235 820 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym BI-DSC
Project Building Integrated Dye Sensitized Solar Cells
Researcher (PI) Adélio Miguel Magalhaes Mendes
Host Institution (HI) UNIVERSIDADE DO PORTO
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary In the last decade, solar and photovoltaic (PV) technologies have emerged as a potentially major technology for power generation in the world. So far the PV field has been dominated by silicon devices, even though this technology is still expensive. Dye-sensitized solar cells (DSC) are an important type of thin-film photovoltaics due to their potential for low-cost fabrication and versatile applications, and because of their aesthetic appearance, semi-transparency and different color possibilities. These advantageous characteristics make DSC the first choice for building-integrated photovoltaics. Despite their great potential, DSCs for building applications are still not available at commercial level. However, to bring DSCs to a marketable product several developments are still needed, and the present project targets to give relevant answers to three key limitations: encapsulation, enhanced electrical conductivity of the glass substrate, and more efficient and low-cost raw materials. Recently, the proponent successfully addressed hermetic device sealing by developing a laser-assisted glass sealing procedure. Thus, the BI-DSC proposal envisages the development of DSC modules of 30x30 cm2, containing four individual cells, and their incorporation in a 1 m2 double glass sheet arrangement for BIPV with an energy efficiency of at least 9% and a lifetime of 20 years. Additionally, aiming at enhanced efficiency of the final device and decreased total costs of DSC manufacturing, new materials will also be pursued. The following inner components were identified as critical: carbon-based counter-electrode, carbon quantum dots, and hierarchical TiO2 photoelectrode. It is then clear that this project is divided into two parallel research directions: a fundamental research line, contributing to the development of the new generation of DSC technology, and a more applied research line that targets the development of a DSC functional module that can be used to pave the way for its industrialization.
Max ERC Funding
1 989 300 €
Duration
Start date: 2013-03-01, End date: 2018-08-31
Project acronym BIMPC
Project Biologically-Inspired Massively-Parallel Computation
Researcher (PI) Stephen Byram Furber
Host Institution (HI) THE UNIVERSITY OF MANCHESTER
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "We aim to establish a world-leading research capability in Europe for advancing novel models of asynchronous computation based upon principles inspired by brain function. This work will accelerate progress towards an understanding of how the potential of brain-inspired many-core architectures may be harnessed. The results will include new brain-inspired models of asynchronous computation and new brain- inspired approaches to fault-tolerance and reliability in complex computer systems.
Many-core processors are now established as the way forward for computing from embedded systems to supercomputers. An emerging problem with leading-edge silicon technology is a reduction in the yield and reliability of modern processors due to high variability in the manufacture of the components and interconnect as transistor geometries shrink towards atomic scales. We are faced with the longstanding problem of how to make use of a potentially large array of parallel processors, but with the new constraint that the individual elements of the system are inherently unreliable.
The human brain remains one of the great frontiers of science – how does this organ upon which we all depend so critically actually do its job? A great deal is known about the underlying technology – the neuron – and we can observe large-scale brain activity through techniques such as magnetic resonance imaging, but this knowledge barely starts to tell us how the brain works. Something is happening at the intermediate levels of processing that we have yet to begin to understand, but the essence of the brain's massively-parallel information processing capabilities and robustness to component failure lies in these intermediate levels.
These two issues draw us towards two high-level research questions:
• Can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computing?
• Can massively parallel computing resources accelerate our understanding of brain function?
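The summary poses research questions rather than algorithms, but for orientation the sketch below simulates the "underlying technology" it mentions, a single neuron, using the standard leaky integrate-and-fire model of the kind commonly mapped onto brain-inspired many-core hardware; the parameter values are generic textbook choices, not tied to this project.

```python
def lif_neuron(input_current_na, t_end_ms=200.0, dt_ms=0.1,
               tau_m_ms=20.0, r_m_mohm=10.0, v_rest_mv=-65.0,
               v_thresh_mv=-50.0, v_reset_mv=-65.0):
    """Leaky integrate-and-fire neuron driven by a constant current.
    Returns the list of spike times in milliseconds."""
    v = v_rest_mv
    spikes = []
    for i in range(int(t_end_ms / dt_ms)):
        # membrane equation: dV/dt = (-(V - V_rest) + R * I) / tau_m
        dv = (-(v - v_rest_mv) + r_m_mohm * input_current_na) / tau_m_ms
        v += dt_ms * dv
        if v >= v_thresh_mv:          # threshold crossing: emit a spike and reset
            spikes.append(i * dt_ms)
            v = v_reset_mv
    return spikes

for current in (1.0, 2.0, 3.0):       # nA, illustrative constant inputs
    print(f"I = {current} nA -> {len(lif_neuron(current))} spikes in 200 ms")
```

Stronger input drives the membrane past threshold more often, so the spike count rises with the current; models of roughly this complexity are what brain-inspired many-core machines execute in very large numbers and in the presence of unreliable components.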
Max ERC Funding
2 399 761 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym CA2PVM
Project Multi-field and multi-scale Computational Approach to design and durability of PhotoVoltaic Modules
Researcher (PI) Marco Paggi
Host Institution (HI) SCUOLA IMT (ISTITUZIONI, MERCATI, TECNOLOGIE) ALTI STUDI DI LUCCA
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "Photovoltaics (PV) based on Silicon (Si) semiconductors is one the most growing technology in the World for renewable, sustainable, non-polluting, widely available clean energy sources. Theoretical and applied research aims at increasing the conversion efficiency of PV modules and their lifetime. The Si crystalline microstructure has an important role on both issues. Grain boundaries introduce additional resistance and reduce the conversion efficiency. Moreover, they are prone to microcracking, thus influencing the lifetime. At present, the existing standard qualification tests are not sufficient to provide a quantitative definition of lifetime, since all the possible failure mechanisms are not accounted for. In this proposal, an innovative computational approach to design and durability assessment of PV modules is put forward. The aim is to complement real tests by virtual (numerical) simulations. To achieve a predictive stage, a challenging multi-field (multi-physics) computational approach is proposed, coupling the nonlinear elastic field, the thermal field and the electric field. To model real PV modules, an adaptive multi-scale and multi-field strategy will be proposed by introducing error indicators based on the gradients of the involved fields. This numerical approach will be applied to determine the upper bound to the probability of failure of the system. This statistical assessment will involve an optimization analysis that will be efficiently handled by a Mathematica-based hybrid symbolic-numerical framework. Standard and non-standard experimental testing on Si cells and PV modules will also be performed to complement and validate the numerical approach. The new methodology based on the challenging integration of advanced physical and mathematical modelling, innovative computational methods and non-standard experimental techniques is expected to have a significant impact on the design, qualification and lifetime assessment of complex PV systems."
Max ERC Funding
1 483 980 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym COMET
Project foundations of COmputational similarity geoMETry
Researcher (PI) Michael Bronstein
Host Institution (HI) UNIVERSITA DELLA SVIZZERA ITALIANA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Similarity is one of the most fundamental notions encountered in problems practically in every branch of science, and is especially crucial in image sciences such as computer vision and pattern recognition. The need to quantify similarity or dissimilarity of some data is central to broad categories of problems involving comparison, search, matching, alignment, or reconstruction. The most common way to model a similarity is using metrics (distances). Such constructions are well-studied in the field of metric geometry, and there exist numerous computational algorithms allowing, for example, to represent one metric using another by means of isometric embeddings.
However, in many applications such a model appears to be too restrictive: many types of similarity are non-metric; it is not always possible to model the similarity precisely or completely e.g. due to missing data; some objects might be mutually incomparable e.g. if they are coming from different modalities. Such deficiencies of the metric similarity model are especially pronounced in large-scale computer vision, pattern recognition, and medical imaging applications.
The ambitious goal of this project is to introduce a paradigm shift in the way we model and compute similarity. We will develop a unifying framework of computational similarity geometry that extends the theoretical metric model, and will allow the development of efficient numerical and computational tools for the representation and computation of generic similarity models. The methods will be developed all the way from mathematical concepts to efficiently implemented code and will be applied to today’s most important and challenging problems in Internet-scale computer vision and pattern recognition, shape analysis, and medical imaging.
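A small numerical illustration of why the metric model can be too restrictive, as argued above: for three collinear points, the Euclidean distance satisfies the triangle inequality, while squared Euclidean distance, a perfectly common dissimilarity in practice, violates it. The example is generic and not drawn from the proposal.

```python
import itertools
import math

points = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (2.0, 0.0)}

def euclidean(p, q):
    return math.dist(p, q)

def squared_euclidean(p, q):
    return math.dist(p, q) ** 2

def triangle_violation(dissimilarity):
    """Return a triple (x, y, z) with d(x, z) > d(x, y) + d(y, z), if one exists."""
    for x, y, z in itertools.permutations(points, 3):
        d_xz = dissimilarity(points[x], points[z])
        d_xy = dissimilarity(points[x], points[y])
        d_yz = dissimilarity(points[y], points[z])
        if d_xz > d_xy + d_yz + 1e-12:
            return (x, y, z)
    return None

print("euclidean violation        :", triangle_violation(euclidean))
print("squared euclidean violation:", triangle_violation(squared_euclidean))
```

Here d(a,c) = 4 exceeds d(a,b) + d(b,c) = 2 for the squared distance, so it is not a metric even though it is a perfectly usable dissimilarity, which is exactly the kind of object the extended framework above is meant to handle.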
Max ERC Funding
1 495 020 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym COMP-DES-MAT
Project Advanced tools for computational design of engineering materials
Researcher (PI) Francisco Javier (Xavier) Oliver Olivella
Host Institution (HI) CENTRE INTERNACIONAL DE METODES NUMERICS EN ENGINYERIA
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary The overall goal of the project is to contribute to the consolidation of the nascent and revolutionary philosophy of “Materials by Design” by resorting to the enormous power provided by today's computational techniques. Limitations of current procedures for developing material-based innovative technologies in engineering are often made manifest; many times only a catalogue, or database, of materials is available, and these new technologies have to adapt to them, in the same way that users of ready-to-wear have to take from the shop the costume that fits them best, rather than one that fits them properly. This constitutes an enormous limitation for the intended goals and scope. Certainly, availability of materials specifically designed by goal-oriented methods could eradicate that limitation, but this purpose faces the bounds of experimental procedures of material design, commonly based on trial and error.
Computational mechanics, with the emerging Computational Materials Design (CMD) research field, has much to offer in this respect. The increasing power of new computer processors and, most importantly, the development of new methods and strategies of computational simulation open new ways to face the problem. The project intends to break through the barriers that presently hinder the development and application of computational materials design, by means of the synergic exploration and development of three complementary families of methods: 1) computational multiscale material modeling (CMM) based on the bottom-up, one-way coupled description of the material structure at different representative scales, 2) development of a new generation of high-performance reduced-order-modeling techniques (HP-ROM), in order to bring the associated computational costs down to affordable levels, and 3) new computational strategies and methods for the optimal design of the material meso/micro structure arrangement and topology (MATO).
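As a compact illustration of the reduced-order-modelling ingredient in item 2, the sketch below applies proper orthogonal decomposition (via the SVD) to synthetic snapshot data rather than to any of the project's multiscale material models: a handful of modes reconstructs a parameterised 1D field to within a small error. The snapshot function and mode count are arbitrary choices.

```python
import numpy as np

# Synthetic "snapshots": a 1D field u(x; mu) sampled for several parameter values mu.
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.5, 2.0, 30)
snapshots = np.stack([np.sin(mu * np.pi * x) * np.exp(-mu * x) for mu in mus], axis=1)

# Proper orthogonal decomposition: the left singular vectors are the POD modes.
modes, singular_values, _ = np.linalg.svd(snapshots, full_matrices=False)

k = 4                                    # size of the reduced basis
basis = modes[:, :k]
reduced = basis.T @ snapshots            # reduced coordinates (k x n_snapshots)
reconstruction = basis @ reduced

rel_error = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
energy = (singular_values[:k] ** 2).sum() / (singular_values ** 2).sum()
print(f"{k} POD modes capture {100 * energy:.2f}% of the snapshot energy")
print(f"relative reconstruction error: {rel_error:.2e}")
```

Replacing a high-dimensional field with a few such modes is what brings the cost of repeated multiscale evaluations down to the affordable levels the proposal targets; the project's HP-ROM work aims at a new generation of such techniques, not at this textbook variant.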
Max ERC Funding
2 372 973 €
Duration
Start date: 2013-02-01, End date: 2018-01-31
Project acronym ComplexiTE
Project An integrated multidisciplinary tissue engineering approach combining novel high-throughput screening and advanced methodologies to create complex biomaterials-stem cells constructs
Researcher (PI) Rui Luis Gonçalves Dos Reis
Host Institution (HI) UNIVERSIDADE DO MINHO
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary New developments in tissue engineering strategies should take into account the complexity of tissue remodelling and the inter-dependency of the many variables associated with stem cell–biomaterial interactions. ComplexiTE proposes an integrated approach to address such multiple factors, in which different innovative methodologies are implemented with the aim of developing tissue-like substitutes with enhanced in vivo functionality. Several ground-breaking advances are expected to be achieved, including: i) improved methodologies for the isolation and expansion of sub-populations of stem cells derived from under-explored sources such as adipose tissue and amniotic fluid; ii) radically new methods to monitor human stem cell behaviour in vivo; iii) new macromolecules isolated from renewable resources, especially of marine origin; iv) combinations of liquid volumes mixing biomaterials and distinct stem cells, generating hydrogel beads upon suitable cross-linking reactions; v) optimised culture of the produced beads in appropriate 3D bioreactors and a novel selection method to sort the beads that show a (pre-defined) positive biological reading; vi) random 3D arrays validated by identifying the natural polymers and cells composing the positive beads; vii) 2D arrays of selected hydrogel spots for new in vivo tests, in which each spot of the implanted chip may be evaluated within the living animal using suitable imaging methods; and viii) new porous scaffolds of the best combinations, formed by particle agglomeration or fiber-based rapid prototyping. The ultimate goal of this proposal is to develop breakthrough research specifically focused on the above key issues and radically innovative approaches to produce and scale up new tissue engineering strategies that are both industrially and clinically relevant, by mastering the inherent complexity associated with the correct selection among a great number of possible combinations of biomaterials, stem cells and culturing conditions.
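As a rough illustration of the combinatorial selection problem described above (all names are hypothetical, and this is not the project's protocol), a few lines of Python show how quickly the space of biomaterial / stem-cell / cross-linker combinations grows and how a pre-defined positive readout would be used to select beads:
from itertools import product

# Hypothetical screening factors; the real screen would cover far more.
biomaterials = ["alginate", "gellan gum", "chitosan", "carrageenan"]   # e.g. marine-origin polymers
cell_sources = ["adipose-derived SC", "amniotic-fluid SC"]
crosslinkers = ["CaCl2", "genipin"]

def biological_readout(bead):
    """Placeholder for the pre-defined positive biological reading
    measured in the 3D bioreactor; here it is just a stub."""
    return hash(bead) % 3 == 0   # stand-in for a real assay

beads = list(product(biomaterials, cell_sources, crosslinkers))
positives = [b for b in beads if biological_readout(b)]
print(f"{len(beads)} combinations screened, {len(positives)} positive beads kept")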
Max ERC Funding
2 320 000 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym COMUNEM
Project Computational Multiscale Neuron Mechanics
Researcher (PI) Antoine Guy Bernard Jerusalem
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "The last few years have seen a growing interest for computational cell mechanics. This field encompasses different scales ranging from individual monomers, cytoskeleton constituents, up to the full cell. Its focus, fueled by the development of interdisciplinary collaborative efforts between engineering, computer science and biology, until recently relatively isolated, has allowed for important breakthroughs in biomedicine, bioengineering or even neurology. However, the natural “knowledge barrier” between fields often leads to the use of one numerical tool for one bioengineering application with a limited understanding of either the tool or the field of application itself. Few groups, to date, have the knowledge and expertise to properly avoid both pits. Within the computational mechanics realm, new methods aim at bridging scale and modeling techniques ranging from density functional theory up to continuum modeling on very large scale parallel supercomputers. To the best of the knowledge of the author, a thorough and comprehensive research campaign aiming at bridging scales from proteins to the cell level while including its interaction with its surrounding media/stimulus is yet to be done. Among all cells, neurons are at the heart of tremendous medical challenges (TBI, Alzheimer, etc.). In nearly all of these challenges, the intrinsic coupling between mechanical and chemical mechanisms in neuron is of drastic relevance. I thus propose here the development of a neuron model constituted of length-scale dedicated numerical techniques, adequately bridged together. As an illustration of its usability, the model will be used for two specific applications: neurite growth and electrical-chemical-mechanical coupling in neurons. This multiscale computational framework will ultimately be made available to the bio- medical community to enhance their knowledge on neuron deformation, growth, electrosignaling and thus, Alzheimer’s disease, cancer or TBI."
Max ERC Funding
1 128 960 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym CREATIV
Project Creating Co-Adaptive Human-Computer Partnerships
Researcher (PI) Wendy Mackay
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "CREATIV explores how the concept of co-adaptation can revolutionize the design and use of interactive software. Co-adaptation is the
parallel phenomenon in which users both adapt their behavior to the system’s constraints, learning its power and idiosyncrasies, and
appropriate the system for their own needs, often using it in ways unintended by the system designer.
A key insight in designing for co-adaptation is that we can encapsulate interactions and treat them as first class objects, called interaction
instruments This lets us focus on the specific characteristics of how human users express their intentions, both learning from and
controlling the system. By making instruments co-adaptive, we can radically change how people use interactive systems, providing
incrementally learnable paths that offer users greater expressive power and mastery of their technology.
The project offers theoretical, technical and empirical contributions. CREATIV will develop a novel architecture and generative principles for
creating co-adaptive instruments. The multi-disciplinary design team includes computer scientists, social scientists and designers as well
as ‘extreme users’, creative professionals who push the limits of their technology. Using participatory design techniques, we will articulate
the design space for co-adaptive instruments and build a series of prototypes. Evaluation activities include qualitative and quantitative
studies, in the lab and in the field, to test hypotheses and assess the success of the prototypes.
The initial goal of the CREATIV project is to fundamentally improve the learning and expressive capabilities of advanced users of creative
software, offering significantly enhanced methods for expressing and exploring their ideas. The ultimate goal is to radically transform
interactive systems for everyone by creating a powerful and flexible partnership between human users and interactive technology."
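A minimal sketch of what reifying an interaction as a first-class "instrument" object could look like; the class, its tiny API and the example bindings are all hypothetical, intended only to make the concept concrete, not to describe the project's architecture:
# Purely illustrative: an interaction instrument that mediates between user
# actions and domain objects, and that can be adapted or rebound at run time.
class InteractionInstrument:
    def __init__(self, name, action):
        self.name = name          # e.g. "brush", "scroll", "pinch-zoom"
        self.action = action      # callable applied to the target object
        self.usage_log = []       # record of use, enabling co-adaptation

    def operate(self, target, **params):
        self.usage_log.append(params)
        return self.action(target, **params)

    def adapt(self, new_action):
        """Let the user (or the system) rebind what the instrument does."""
        self.action = new_action

# Usage: a brush instrument first draws, then is appropriated for erasing.
canvas = []
brush = InteractionInstrument("brush", lambda c, pos: c.append(("draw", pos)))
brush.operate(canvas, pos=(10, 20))
brush.adapt(lambda c, pos: c.append(("erase", pos)))
brush.operate(canvas, pos=(10, 20))
print(canvas)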
Max ERC Funding
2 458 996 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym CV-SUPER
Project Computer Vision for Scene Understanding from a first-person Perspective
Researcher (PI) Bastian Leibe
Host Institution (HI) RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "The goal of CV-SUPER is to create the technology to perform dynamic visual scene understanding from the perspective of a moving human observer. Briefly stated, we want to enable computers to see and understand what humans see when they navigate their way through busy inner-city locations. Our target scenario is dynamic visual scene understanding in public spaces, such as pedestrian zones, shopping malls, or other locations primarily designed for humans. CV-SUPER will develop computer vision algorithms that can observe the people populating those spaces, interpret and understand their actions and their interactions with other people and inanimate objects, and from this understanding derive predictions of their future behaviors within the next few seconds. In addition, we will develop methods to infer semantic properties of the observed environment and learn to recognize how those affect people’s actions. Supporting those tasks, we will develop a novel design of an object recognition system that scales up to potentially hundreds of categories. Finally, we will bind all those components together in a dynamic 3D world model, showing the world’s current state and facilitating predictions how this state will most likely change within the next few seconds. These are crucial capabilities for the creation of technical systems that may one day assist humans in their daily lives within such busy spaces, e.g., in the form of personal assistance devices for elderly or visually impaired people or in the form of future generations of mobile service robots and intelligent vehicles."
Max ERC Funding
1 499 960 €
Duration
Start date: 2012-11-01, End date: 2017-10-31
Project acronym DAMREG
Project Pushing the Frontier of Brittleness: Damage Resistant Glasses
Researcher (PI) Tanguy Gilles Michel Rouxel
Host Institution (HI) UNIVERSITE DE RENNES I
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary "In order to improve the strength of a glass part (flat display, window, lens, fiber, etc.), most investigations so far were devoted to thermal and chemical surface treatments aimed at generating compressive stresses at the surface. The DAMREG project focuses on the incidence of the glass composition and atomic network structure on the mechanical properties, and specifically on the cracking and fracture behavior, and is based on the experience and expertise of the PI on the structure-property relationships in glass science. This project proposes to address the fundamental issue of glass brittleness in a new paradigm of thinking, questioning the usefulness of the standard fracture toughness parameter, with emphasis on the surface flaw generation process (multiscale approach), and aims at determining novel routes to improve the mechanical performance of glass further promoting innovative applications. DAMREG involves revisiting the fundamental fracture mechanics concepts, the preparation of novel glass compositions, and nanoscale physico-chemical and mechanical characterization. So far most glass fracture studies focused on the crack tip behavior, and were limited to vitreous silica. A crack acts as a lever arm for the stress so that the singular stress at the tip is proportional to the crack length and inversely proportional to the square-root of the tip radius (provided this has a meaning). Since a crack can hardly be cured or shielded at ambient, the presence of a sharp crack is already detrimental. On the contrary to this approach, DAMREG is aimed at understanding the crack initiation process, and the main objective is to define some roadmap to design glasses (composition, thermo-mechanical treatments etc.) with better damage (initiation) resistance."
Max ERC Funding
1 821 596 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym DEEPSEA
Project Parallelism and Beyond: Dynamic Parallel Computation for Efficiency and High Performance
Researcher (PI) Umut Acar
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary We propose to radically extend the frontiers of two major themes in computing, parallelism and dynamism, and develop a novel paradigm of computing: dynamic-parallelism. To this end, we will follow two lines of research. First, we will develop techniques for extracting efficiency and high performance from parallel programs written in high-level programming languages. Second, we will develop the dynamic-parallelism model, where computations can respond to a wide variety of dynamic changes to their data automatically and efficiently, by developing novel abstractions (calculi), high-level programming-language constructs, and compilation techniques. The research will culminate in a language that extends the C programming language with support for parallel and dynamic-parallel programming.
The proposal is motivated by urgent needs driven by the advent of multicore chips, which is making parallelism mainstream, and the increasing ubiquity of software, which requires applications to operate on highly dynamic data. These advances demand parallel and highly dynamic software, which remains too difficult and labor intensive to develop. The urgency is further underlined by the increasing data and problem sizes (online data grows exponentially, doubling every few years) that require similarly powerful advances in performance.
The proposal will achieve profound impact by dramatically simplifying the development of high-performing dynamic and dynamic-parallel software. As a result, programmer productivity and software quality, including correctness, reliability, performance, and resource (e.g., time and energy) consumption, will improve significantly. The proposal will not only open new research opportunities in parallel computing, programming languages, and compilers, but also in other fields where parallel and dynamic problems abound, e.g., algorithms, computational biology, geometry, graphics, machine learning, and software systems.
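To make the flavour of "dynamic-parallelism" concrete, here is a deliberately simple Python sketch (the project itself targets an extension of C, and nothing below is taken from the proposal): a parallel map whose per-element results are cached, so that after a small change to the input only the affected elements are recomputed.
from concurrent.futures import ThreadPoolExecutor

_cache = {}

def expensive(x):
    return x * x          # stand-in for a costly per-element computation

def dynamic_parallel_map(xs):
    # Parallel part: compute the missing results concurrently.
    with ThreadPoolExecutor() as pool:
        missing = [x for x in xs if x not in _cache]
        for x, y in zip(missing, pool.map(expensive, missing)):
            _cache[x] = y
    # Dynamic part: everything already seen is reused from the cache.
    return [_cache[x] for x in xs]

data = list(range(8))
print(dynamic_parallel_map(data))      # all elements computed in parallel
data[3] = 100
print(dynamic_parallel_map(data))      # only the changed element is recomputed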
Max ERC Funding
1 076 570 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym DEPENDABLECLOUD
Project Towards the dependable cloud: Building the foundations for tomorrow's dependable cloud computing
Researcher (PI) Rodrigo Seromenho Miragaia Rodrigues
Host Institution (HI) INESC ID - INSTITUTO DE ENGENHARIA DE SISTEMAS E COMPUTADORES, INVESTIGACAO E DESENVOLVIMENTO EM LISBOA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Cloud computing is being increasingly adopted by individuals, organizations, and governments. However, as the computations that are offloaded to the cloud expand to societal-critical services, the dependability requirements of cloud services become much higher, and we need to ensure that the infrastructure that supports these services is ready to meet these requirements. In particular, this proposal tackles the challenges that arise from two distinctive characteristics of the cloud infrastructure.
The first is that non-crash faults, despite being considered highly unlikely by the designers of traditional systems, become commonplace at the scale and complexity of the cloud infrastructure. We argue that the current ad-hoc methods for handling these faults are insufficient, and that the only principled approach of assuming Byzantine faults is too pessimistic. Therefore, we call for a new systematic approach to tolerating non-crash, non-adversarial faults. This requires the definition of a new fault model, and the construction of a series of building blocks and key protocol elements that enable the construction of fault-tolerant cloud services.
The second issue is that to meet their scalability requirements, cloud services spread their state across multiple data centers, and direct users to the closest one. This raises the issue that not all operations can be executed optimistically, without being aware of concurrent operations over the same data, and thus multiple levels of consistency must coexist. However, this puts the onus of reasoning about which behaviors are allowed under such a hybrid consistency model on the programmer of the service. We propose a systematic solution to this problem, which includes a novel consistency model that allows for developing highly scalable services that are fast when possible and consistent when necessary, and a labeling methodology to guide the programmer in deciding which operations can run at each consistency level.
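A minimal sketch of the labeling idea described above, with hypothetical operation names and stand-in coordination hooks (this is an illustration of the general concept, not the proposal's actual consistency model):
from enum import Enum

class Level(Enum):
    FAST = "eventual"        # commutes with concurrent ops; execute locally
    CONSISTENT = "strong"    # needs global ordering; coordinate first

# The labeling methodology would guide the programmer in filling this table.
OPERATION_LABELS = {
    "add_to_cart": Level.FAST,
    "post_comment": Level.FAST,
    "checkout": Level.CONSISTENT,        # e.g. must not oversell inventory
    "change_password": Level.CONSISTENT,
}

def execute(op, coordinate, apply_locally):
    if OPERATION_LABELS[op] is Level.CONSISTENT:
        coordinate(op)        # e.g. consensus or routing to a primary site
    apply_locally(op)

# Usage with trivial stand-ins for the two hooks:
execute("add_to_cart", coordinate=print, apply_locally=print)
execute("checkout", coordinate=lambda op: print("coordinating", op),
        apply_locally=print)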
Max ERC Funding
1 076 084 €
Duration
Start date: 2012-10-01, End date: 2018-01-31
Project acronym DISCOTEX
Project Distributional Compositional Semantics for Text Processing
Researcher (PI) Stephen Clark
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "The notion of meaning is central to many areas of Computer Science, Artificial Intelligence (AI), Linguistics, Philosophy, and Cognitive Science. A formal account of the meaning of natural language utterances is crucial to AI, since an understanding of natural language is at the heart of much intelligent behaviour. More specifically, Natural Language Processing (NLP) --- the branch of AI concerned with the automatic processing, analysis and generation of text --- requires a model of meaning for many of its tasks and applications.
There have been two main approaches to modelling the meaning of language in NLP. The first, the “compositional” approach, is based on classical ideas from Philosophy and Mathematical Logic, and includes formal accounts of how the meaning of a sentence can be determined from the relations of words in a sentence. The second, more recent approach focuses on the meanings of the words themselves. This is the “distributional” approach to lexical semantics and is based on the idea that the meanings of words can be determined by considering the contexts in which words appear in text.
The ambitious idea in this proposal is to exploit the strengths of the two approaches, by developing a unified model of distributional and compositional semantics, and exploiting it for NLP tasks and
applications. The aim is to make the following fundamental contributions:
1. advance the theoretical study of meaning in Linguistics, Computer Science and AI;
2. develop new meaning-sensitive approaches to NLP applications which can be robustly applied to naturally occurring text.
The claim is that language technology based on “shallow” approaches is reaching its performance limit, and the next generation of language technology requires a more sophisticated, but robust, model of meaning, which this project will provide."
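A toy sketch of the simplest possible unification of the two approaches, given only to make the idea concrete (it is emphatically not the model the project will develop): word vectors built from co-occurrence counts, composed into sentence vectors by addition and compared by cosine similarity.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "dogs chase cats", "cats chase mice", "dogs like bones",
    "cats like fish", "mice like cheese",
]

# Distributional step: represent each word by the words it co-occurs with.
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for c in words:
            if c != w:
                vectors[w][c] += 1

def compose(sentence):
    """Compositional step (naive): sentence meaning as the sum of word vectors."""
    total = Counter()
    for w in sentence.split():
        total.update(vectors[w])
    return total

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

print(cosine(compose("dogs chase cats"), compose("cats chase mice")))   # similar
print(cosine(compose("dogs chase cats"), compose("mice like cheese")))  # less so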
Max ERC Funding
1 087 930 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym ECAP
Project Efficient Cryptographic Arguments and Proofs
Researcher (PI) Jens Groth
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Privacy and verifiability are fundamental security goals that often conflict with each other. In elections we want to verify that the final tally is correct without violating the voters’ privacy; companies are audited but do not want financial statements to disclose the details of their business strategies; people identifying themselves do not want their personal information to be abused in identity theft, etc.
Zero-knowledge proofs allow the verification of facts with minimal privacy loss. More precisely, a zero-knowledge proof is a protocol that allows a prover to convince a verifier about the truth of a statement in a manner that does not disclose any other information. The ability to combine verification and privacy makes zero-knowledge proofs extremely useful; they are used in numerous cryptographic protocols.
The purpose of this proposal is to establish a research group dedicated to the study of zero-knowledge proofs. A main focus of the group will be to improve efficiency. Zero-knowledge proofs can be very complex and in many security applications the zero-knowledge proofs are the main performance bottleneck. This leads to a significant cost in terms of time and money; or if the cost is too high it may force users to use insecure schemes without zero-knowledge proofs.
Our vision is to reduce the cost of zero-knowledge proofs so much that, instead of being expensive protocol components, they become so cheap that their cost is insignificant compared to other protocol components. This will make existing cryptographic protocols that rely on zero-knowledge proofs faster and also broaden the range of security applications where zero-knowledge proofs can be used."
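To make the "prove without disclosing" idea concrete, here is a minimal sketch of Schnorr's classic honest-verifier zero-knowledge proof of knowledge of a discrete logarithm, with tiny, deliberately insecure toy parameters; it illustrates the general notion only and is not one of the project's constructions.
import secrets

p, q, g = 23, 11, 2          # g generates a subgroup of prime order q modulo p
x = 7                        # prover's secret
y = pow(g, x, p)             # public key

# Commit: prover picks a random nonce and sends t = g^r.
r = secrets.randbelow(q)
t = pow(g, r, p)
# Challenge: chosen by the verifier.
c = secrets.randbelow(q)
# Response: reveals nothing about x on its own.
s = (r + c * x) % q
# Verification: convinces the verifier that the prover knows x, without leaking x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")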
Max ERC Funding
1 346 074 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym ELASTIC-TURBULENCE
Project Purely-elastic flow instabilities and transition to elastic turbulence in microscale flows of complex fluids
Researcher (PI) Manuel António Moreira Alves
Host Institution (HI) UNIVERSIDADE DO PORTO
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Flows of complex fluids, such as many biological fluids and most synthetic fluids, are common in our daily life and are very important from an industrial perspective. Because of their inherent nonlinearity, the flow of complex viscoelastic fluids often leads to counterintuitive and complex behaviour and, above critical conditions, can prompt flow instabilities, even under low Reynolds number conditions, that are entirely absent in the corresponding Newtonian fluid flows.
The primary goal of this project is to substantially expand the frontiers of our current knowledge regarding the mechanisms that lead to the development of such purely-elastic flow instabilities, and ultimately to understand the transition to so-called “elastic turbulence”, a turbulent-like phenomenon which can arise even under inertialess flow conditions. This is an extremely challenging problem, and to significantly advance our knowledge in such important flows these instabilities will be investigated in a combined manner encompassing experiments, theory and numerical simulations. Such a holistic approach will enable us to understand the underlying mechanisms of those instabilities and to develop accurate criteria for their prediction far in advance of what we could achieve with either approach separately. A deep understanding of the mechanisms generating elastic instabilities and subsequent transition to elastic turbulence is crucial from a fundamental point of view and for many important practical applications involving engineered complex fluids, such as the design of microfluidic mixers for efficient operation under inertialess flow conditions, or the development of highly efficient micron-sized energy management and mass transfer systems.
This research proposal will create a solid basis for the establishment of an internationally-leading research group led by the PI studying flow instabilities and elastic turbulence in complex fluid flows.
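For orientation (standard definitions, not taken from the abstract), the competition between elasticity and inertia in such flows is usually quantified by the Weissenberg and Reynolds numbers and their ratio, the elasticity number:
\[
\mathrm{Re} = \frac{\rho U L}{\eta}, \qquad
\mathrm{Wi} = \frac{\lambda U}{L}, \qquad
\mathrm{El} = \frac{\mathrm{Wi}}{\mathrm{Re}} = \frac{\lambda \eta}{\rho L^{2}},
\]
where \(\rho\) and \(\eta\) are the fluid density and viscosity, \(\lambda\) the relaxation time, and \(U\) and \(L\) characteristic velocity and length scales. In microscale geometries \(L\) is small, so El is large and elastic stresses (Wi) can dominate even when inertia (Re) is negligible, which is precisely the regime in which purely-elastic instabilities and elastic turbulence arise.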
Max ERC Funding
994 110 €
Duration
Start date: 2012-10-01, End date: 2018-01-31
Project acronym EQUALIS
Project EQualIS : Enhancing the Quality of Interacting Systems
Researcher (PI) Patricia Bouyer-Decitre
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary The ubiquitous use of computerized systems, and their increasing complexity, demand formal evidence of their correctness. While current formal-verification techniques have already been applied to a number of case studies, they are not yet sufficient to fully analyze several aspects of complex systems such as communication networks, embedded systems or industrial controllers. There are three important characteristics of these systems which need to be tackled:
- the rich interaction that crucially constrains the behaviour of such systems is poorly taken into account in current models;
- the imprecision and uncertainty inherent to systems that are implemented (e.g. on a digital processor), or which interact via a network, or which control physical equipment, are mostly ignored by the verification process;
- the deployment of large interacting systems highlights the lack of a modular approach to the synthesis of systems.
The goal of this project is to develop a systematic approach to the formal analysis of interacting systems. We will use models from game theory to properly take the interaction in those systems into account, and will propose quantitative measures of correctness and quality that account for possible perturbations in the systems. The core of the project will be the development of various algorithms for synthesizing high-quality interactive systems, with particular attention to the modularity of the approach and to the efficiency of the algorithms. The EQualIS project will deeply impact the design and verification of interacting systems by providing a rich framework that will increase our confidence in the analysis of such systems.
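As one standard example of such a quantitative measure (chosen here for illustration, not prescribed by the abstract), the mean-payoff value of an infinite play \(e_1 e_2 e_3 \cdots\) in a game whose edges carry weights \(w\) is commonly defined as
\[
\mathrm{MP}(e_1 e_2 e_3 \cdots) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} w(e_i),
\]
and synthesis then asks for a controller strategy that guarantees a good value against all behaviours of the environment; robustness to perturbations can be captured by similar limit-average penalties.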
Max ERC Funding
1 497 431 €
Duration
Start date: 2013-01-01, End date: 2019-02-28
Project acronym FLEXABLE
Project Deformable Multiple-View Geometry and 3D Reconstruction, with Application to Minimally Invasive Surgery
Researcher (PI) Adrien Bartoli
Host Institution (HI) UNIVERSITE CLERMONT AUVERGNE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Project FLEXABLE lies in the field of 3D Computer Vision, which seeks to recover depth or the 3D shape of the observed environment from images. One of the most successful and mature techniques in 3D Computer Vision is Shape-from-Motion which is based on the well-established theory of Multiple-View Geometry. This uses multiple images and assumes that the environment is rigid.
The world is however made of objects which move and undergo deformations. Researchers have tried to extend Shape-from-Motion to a deformable environment for about a decade, yet with only very limited success to date. We believe that there are two main reasons for this. Firstly there is still a lack of a solid theory for Deformable Shape-from-Motion. Fundamental questions, such as what kinds of deformation can facilitate unambiguous 3D reconstruction, are not yet answered. Secondly practical solutions have not yet come about: for accurate and dense 3D shape results, the Motion cue must be combined with other visual cues, since it is certainly weaker in the deformable case. It may require strong object-specific priors, needing one to bridge the gap with object recognition.
This project develops these two key areas. It includes three main lines of research: theory, its computational implementation, and its real-world application. Deformable Multiple-View Geometry will generalize the existing rigid theory and will provide researchers with a rigorous mathematical framework that underpins the use of Motion as a proper visual cue for Deformable 3D Reconstruction. Our theory will require us to introduce new mathematical tools from differentiable projective manifolds. Our implementation will study and develop new computational means for solving the difficult inverse problems formulated in our theory. Finally, we will develop cutting-edge applications of our framework specific to Minimally Invasive Surgery, for which there is a very high need for 3D computer vision.
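For context, the rigid two-view relation at the heart of classical Multiple-View Geometry is the epipolar constraint (a standard result, quoted here only to indicate what must be generalized):
\[
\mathbf{x}'^{\mathsf{T}} \mathbf{F}\, \mathbf{x} = 0,
\]
where \(\mathbf{x}\) and \(\mathbf{x}'\) are corresponding image points in homogeneous coordinates and \(\mathbf{F}\) is the \(3\times 3\) fundamental matrix. This holds only when the scene is rigid; a Deformable Multiple-View Geometry must replace it with constraints that remain valid when the observed surface deforms between views.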
Max ERC Funding
1 481 294 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym FSC
Project Fast and Sound Cryptography: From Theoretical Foundations to Practical Constructions
Researcher (PI) Alon Rosen
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Much currently deployed cryptography is designed using more “art'” than “science,” and most of the schemes used in practice lack rigorous justification for their security. While theoretically sound designs do exist, they tend to be quite a bit slower to run and hence are not realistic from a practical point of view. This gap is especially evident in “low-level” cryptographic primitives, which are the building blocks that ultimately process the largest quantities of data.
Recent years have witnessed dramatic progress in the understanding of highly-parallelizable (local) cryptography, and in the construction of schemes based on the mathematics of geometric objects called lattices. Besides being based on firm theoretical foundations, these schemes also allow for very efficient implementations, especially on modern microprocessors. Yet despite all this recent progress, there has not yet been a major effort specifically focused on bringing the efficiency of such constructions as close as possible to practicality; this project will do exactly that.
The main goal of the Fast and Sound Cryptography project is to develop new tools and techniques that would lead to practical and theoretically sound implementations of cryptographic primitives. We plan to draw ideas from both theory and practice, and expect their combination to generate new questions, conjectures, and insights. A considerable fraction of our efforts will be devoted to demonstrating the efficiency of our constructions. This will be achieved by a concrete setting of parameters, allowing for cryptanalysis and direct performance comparison to popular designs.
While our initial focus will be on low-level primitives, we expect our research to also have direct impact on the practical efficiency of higher-level cryptographic tasks. Indeed, many of the recent improvements in the efficiency of lattice-based public-key cryptography can be traced back to research on the efficiency of lattice-based hash functions."
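As a concrete toy example of the kind of low-level lattice-based primitive the abstract alludes to, here is a minimal sketch of an Ajtai/SIS-style hash function h_A(x) = A·x mod q over short (0/1) inputs; the parameters are deliberately tiny and insecure, and the code is illustrative rather than a construction from the project.
import secrets

q, n, m = 97, 4, 16            # modulus, output dimension, input length (toy values)
A = [[secrets.randbelow(q) for _ in range(m)] for _ in range(n)]

def hash_bits(x):
    """x is a list of m bits; the output is a vector in Z_q^n.
    Finding a collision would give a short nonzero z with A·z = 0 (mod q),
    i.e. solve the Short Integer Solution (SIS) problem for A."""
    assert len(x) == m and all(b in (0, 1) for b in x)
    return tuple(sum(A[i][j] * x[j] for j in range(m)) % q for i in range(n))

msg = [secrets.randbelow(2) for _ in range(m)]
print("input bits :", msg)
print("hash value :", hash_bits(msg))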
Max ERC Funding
1 498 214 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym GALATEA
Project Tailoring Material Properties Using Femtosecond Lasers: A New Paradigm for Highly Integrated Micro-/Nano- Scale Systems
Researcher (PI) Yves, Jérôme Bellouard
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Using recent progress in laser technology, and in particular in the field of ultra-fast lasers, we are getting close to accomplishing the alchemist's dream of transforming materials. Compact lasers can generate pulses with ultra-high peak powers in the terawatt or even petawatt range. These high-power pulses lead to a radically different laser-matter interaction from that obtained with conventional lasers. Non-linear multi-photon processes are observed; they open new and exciting opportunities to tailor matter in its intimate structure, with sub-wavelength spatial resolution and in three dimensions.
This project aims at exploring the use of these ultrafast lasers to locally tailor the physical properties of glass materials. More specifically, our objective is to create polymorphs embedded in bulk structures and to demonstrate their use as a means to introduce new functionalities into the material.
The long-term objective is to develop the scientific understanding and technological know-how to create three-dimensional objects with nanoscale features, where optics, fluidics and micromechanical elements as well as active functions are integrated in a single monolithic piece of glass, and to do so using a single process.
This is multidisciplinary research that pushes the frontier of our current knowledge of femtosecond laser interaction with glass to demonstrate a novel design platform for future micro-/nano-systems.
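For scale (illustrative numbers, not taken from the abstract), the peak power of an ultrashort pulse is roughly its energy divided by its duration:
\[
P_{\mathrm{peak}} \approx \frac{E_{\mathrm{pulse}}}{\tau}, \qquad
\frac{0.1\ \mathrm{J}}{100\ \mathrm{fs}} = \frac{10^{-1}\ \mathrm{J}}{10^{-13}\ \mathrm{s}} = 10^{12}\ \mathrm{W} = 1\ \mathrm{TW},
\]
so a compact laser emitting sub-picosecond pulses of modest energy already reaches the terawatt regime mentioned above, which is what drives the strongly non-linear, multi-photon interaction with glass.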
Max ERC Funding
1 757 396 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym HELIOS
Project Towards Total Scene Understanding using Structured Models
Researcher (PI) Philip Hilaire Sean Torr
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "This project is at the interface between computer vision and linguistics: the aim is to have an algorithm generate relevant sentences that describe a scene given one or more images.
Scene understanding has been one of the central goals in computer vision for many decades. It involves various individual tasks, such as object recognition, action understanding and 3D scene recovery. One simple definition of this task is to say that scene understanding is equivalent to being able to generate meaningful natural language descriptions of a scene, an important problem in computational linguistics. Whilst even a child can do this with ease, the solution of this fundamental problem has remained elusive. This is because there has been a large amount of research in computer vision that is very deep, but not broad, leading to an in-depth understanding of edge and feature detectors, tracking, camera calibration, projective geometry, segmentation, denoising, stereo methods, object detection, etc. However, there has been only a limited amount of research on a framework for integrating these functional elements into a method for scene understanding.
Within this proposal I advocate a complete view of computer vision in which the scene is dealt with as a whole, and in which problems that are normally considered distinct by most researchers are unified into a common cost function, or energy. I will discuss the form the energy should take and efficient algorithms for learning and inference. Our preliminary experiments indicate that such a unified treatment will lead to a paradigm shift in computer vision with a quantum leap in performance. We intend to build embodied demonstrators, including a prosthetic vision aid for the visually impaired. The World Health Organization gives a figure of over 300 million such people worldwide, which means that, in addition to being transformative in the areas of linguistics, HCI, robotics and computer vision, this work will have a massive impact worldwide."
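As a minimal illustration of a unified energy over scene labels, the sketch below combines per-pixel (unary) costs with a Potts-style pairwise smoothness term; the specific form and weights are assumptions for illustration, not the project's actual model.

    import numpy as np

    def energy(labels, unary, pairwise_weight, edges):
        """E(x) = sum_i unary[i, x_i] + w * sum_{(i,j) in edges} [x_i != x_j].
        A toy Potts-style energy; a real scene model adds many more terms."""
        e = sum(unary[i, labels[i]] for i in range(len(labels)))
        e += pairwise_weight * sum(labels[i] != labels[j] for i, j in edges)
        return e

    # Toy example: 3 pixels in a chain, 2 possible labels.
    unary = np.array([[0.1, 2.0], [1.5, 0.2], [0.3, 1.0]])
    edges = [(0, 1), (1, 2)]
    print(energy([0, 1, 0], unary, pairwise_weight=0.5, edges=edges))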
Max ERC Funding
2 493 495 €
Duration
Start date: 2014-01-01, End date: 2018-12-31
Project acronym HERMES
Project HERMES – High Exponential Rise in Miniaturized cantilever-like Sensing
Researcher (PI) Anja Boisen
Host Institution (HI) DANMARKS TEKNISKE UNIVERSITET
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary Miniaturized cantilever-like sensors have evolved rapidly. However, when it comes to major breakthroughs in fundamental studies as well as commercial applications, these sensors face severe challenges: i) reliability – often only one or two measurements are performed for the same conditions due to very slow data generation, and the results are rarely confirmed by orthogonal sensing technologies; ii) sensitivity – in many applications the need is now for ultra-low detection limits; iii) reproducibility – very few results have been reported on the reproducibility of these sensors; iv) throughput – read-out technologies are extremely slow and tedious. In order to take a great leap forward in cantilever-like sensing, I suggest a new generation of simplified and optimized cantilever-like sensing structures implemented in a DVD-based platform which will specifically address these issues.
My overall hypothesis is that the true potential of these exciting sensors can only be released when using a simple and reliable read-out system that allows us to focus on the mechanical performance of the sensors. Thus we will keep the sensors as simple as possible. The DVD readout makes it possible to generate large amounts of data and to focus on mechanics and the interplay between mechanics, optics and electrochemistry. It will be a technological challenge to realize a robust and reliable DVD platform that facilitates optical read-out as well as actuation. The DVD platform will enable fast and iterative development of hybrid cantilever-like systems, which draws upon our more than 10 years' experience in the field. These sensors will be realised using Si- and polymer-based cleanroom fabrication. The focus is on the design, fabrication, characterization and applications of cantilever-like sensors and on DVD-inspired system integration. By the end of HERMES we will have a unique platform which will be the onset of many new types of specific high-throughput applications and sensor development projects.
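For context, resonant cantilever mass sensing is often summarised by the textbook small-mass relation Δf ≈ -(f0 / 2m_eff)·Δm; the sketch below applies it with illustrative numbers, not figures from the HERMES proposal.

    # Toy estimate of the resonance-frequency shift of a cantilever mass sensor,
    # using the standard small-mass approximation df ≈ -(f0 / (2*m_eff)) * dm.
    f0 = 1.0e6        # resonance frequency, Hz
    m_eff = 1.0e-12   # effective cantilever mass, kg (1 ng)
    dm = 1.0e-18      # added mass, kg (1 fg)

    df = -(f0 / (2 * m_eff)) * dm
    print(f"Frequency shift ≈ {df:.2f} Hz")   # ≈ -0.5 Hz for these numbers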
Max ERC Funding
2 499 466 €
Duration
Start date: 2013-02-01, End date: 2018-01-31
Project acronym iModel
Project Intelligent Shape Modeling
Researcher (PI) Olga Sorkine
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Digital 3D content creation and modeling has become an indispensable part of our technology-driven society. Any modern design and manufacturing process involves manipulation of digital 3D shapes. Many industries have long expected ubiquitous 3D to be the next revolution in multimedia. Yet, contrary to “traditional” media such as digital music and video, 3D content creation and editing is not accessible to the general public, and 3D geometric data is not nearly as widespread as anticipated. Despite extensive geometric modeling research over the past two decades, 3D modeling is still a restricted domain that demands tedious, time-consuming and expensive effort even from trained professionals, namely engineers, designers and digital artists. Geometric modeling is reported to constitute one of the lowest-productivity components of the product life cycle.
The major reason why 3D shape modeling remains inaccessible and tedious is that our current geometry representations and modeling algorithms focus on low-level mathematical properties of the shapes, entirely missing structural, contextual or semantic information. As a consequence, current modeling systems are unintuitive, inefficient and difficult for humans to work with. We believe that instead of continuing on the current incremental research path, a concentrated effort is required to fundamentally rethink the shape modeling process and re-align research agendas, putting high-level shape structure and function at the core. We propose a research plan that will lead to intelligent digital 3D modeling tools that integrate semantic knowledge about the objects being modeled and provide the user with an intuitive and logical response, fostering creativity and eliminating unnecessary low-level manual modeling tasks. Achieving these goals will represent a fundamental change to our current notion of 3D modeling, and will finally enable us to leverage the true potential of digital 3D content for society.
Max ERC Funding
1 497 442 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym IMPRO
Project Implicit Programming
Researcher (PI) Viktor Kuncak
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "I propose implicit programming, a paradigm for developing reliable software using new programming language specification constructs and tools, supported through the new notion of software synthesis procedures. The paradigm will enable developers to use specifications as executable programming language constructs and will automate some of the program construction tasks to the point where they become feasible for the end users. Implicit programming will increase developer productivity by enabling developers to focus on the desired software functionality instead of worrying about low-level implementation details. Implicit programming will also improve software reliability, because the presence of specifications will make programs easier to analyze.
From the algorithmic perspective, I propose a new agenda for research in algorithms for decidable logical theories. An input to such an algorithm is a logical formula (or a boolean-valued programming language expression). Whereas a decision procedure for satisfiability merely checks whether there exists a satisfying assignment for the formula, we propose to develop synthesis procedures. A synthesis procedure views the input as a relation between inputs and outputs, and produces a function from input variables to output variables. In other words, it transforms a specification into a computable function. We will design synthesis procedures for important classes of formulas motivated by useful programming language fragments. We will use synthesis procedures as a compilation mechanism for declarative programming language constructs, ensuring correctness by construction. To develop practical synthesis procedures we will combine insights from decision procedure research (including results on SMT solvers) with research on compiler construction, program analysis and program transformation. The experience from the rich model toolkit initiative (http://RichModels.org) will help us address these goals."
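A minimal sketch of the underlying idea, as a toy enumerative synthesizer rather than the symbolic synthesis procedures proposed here: a specification viewed as an input/output relation is turned into a function from inputs to outputs.

    # Toy "synthesis": turn a relation spec(x, y) into a function f(x) by
    # searching a small candidate output space. Real synthesis procedures
    # work symbolically on decidable theories; this is only an illustration.
    def synthesize(spec, candidate_outputs):
        def f(x):
            for y in candidate_outputs:
                if spec(x, y):
                    return y
            raise ValueError(f"no output satisfies the specification for input {x}")
        return f

    # Specification: y is the integer square root of x.
    isqrt = synthesize(lambda x, y: y * y <= x < (y + 1) * (y + 1), range(1000))
    print(isqrt(17))   # 4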
Max ERC Funding
1 439 240 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym IMPUNEP
Project Innovative Materials Processing Using Non-Equilibrium Plasmas
Researcher (PI) Allan Matthews
Host Institution (HI) THE UNIVERSITY OF MANCHESTER
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary Current bulk materials processing methods are nearing their limit in terms of their ability to produce innovative materials with compositional and structural consistency.
The aim of this ambitious project is to remove barriers to materials development, by researching novel methods for the processing of engineering materials, using advanced non-equilibrium plasma systems, to achieve a paradigm shift in the field of materials synthesis. These new processes have the potential to overcome the constraints of existing methods and also be environmentally friendly and produce novel materials with enhanced properties (mechanical, chemical and physical).
The research will utilise plasmas in ways not used before (in bulk materials synthesis rather than thin film formation) and it will investigate different types of plasmas (vacuum, atmospheric and electrolytic), to ensure optimisation of the processing routes across the whole range of material types (including metals, ceramics and composites).
The materials synthesised will have benefits for products across key applications sectors, including energy, healthcare and aerospace. The processes will avoid harmful chemicals and will make optimum use of scarce material resources.
This interdisciplinary project (involving engineers, physicists, chemists and modellers) has fundamental “blue skies” and transformative aspects. It is also high-risk due to the aim to produce “bulk” materials at adequate rates and with consistent uniform structures, compositions and phases (and therefore properties) throughout the material. There are many challenges to overcome, relating to the study of the plasma systems and materials produced; these aspects will be pursued using empirical and modelling approaches. The research will pursue new lines of enquiry using an unconventional synthesis approach whilst operating at the interface with more established discipline areas of plasma physics, materials chemistry, process diagnostics, modelling and control.
Max ERC Funding
2 499 283 €
Duration
Start date: 2013-02-01, End date: 2018-09-30
Project acronym INSILICO-CELL
Project Predictive modelling and simulation in mechano-chemo-biology: a computer multi-approach
Researcher (PI) Jose Manuel Garcia-Aznar
Host Institution (HI) UNIVERSIDAD DE ZARAGOZA
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Living tissues are regulated by multi-cellular collectives mediated at the cellular level through complex interactions between mechanical and biochemical factors. A further understanding of these mechanisms could provide new insights for the development of therapies and diagnostic techniques, reducing animal experiments. I propose a combined and complementary methodology to advance our knowledge of how cells interact with each other and with the environment to produce the large-scale organization typical of tissues. I will couple in-silico and in-vitro models to investigate the micro-fabrication of tissues in-vitro using a 3D multicellular environment. Through computational cell-based modelling of tissue development, I will use a multiscale and multiphysics approach to investigate various key factors: how environmental conditions (mechanical and biochemical) drive cell behaviour, how individual cell behaviour produces multicellular patterns, how cells respond to the multicellular environment, how cells are able to fabricate new tissues and how cell-matrix interactions affect these processes. In-vitro experiments will be developed to validate the numerical models, determine their parameters, improve their hypotheses and help design new experiments. The in-vitro experiments will be performed in a microfluidic platform capable of controlling biochemical and mechanical conditions in a 3D environment. This research will be applied to three applications where the role of environmental conditions is important and the main biological events are cell migration, cell-matrix and cell-cell interactions: bone regeneration, wound healing and angiogenesis.
Max ERC Funding
1 299 083 €
Duration
Start date: 2012-11-01, End date: 2018-05-31
Project acronym INTECOCIS
Project Introducing Exascale Computing in combustion instabilities Simulations (INTECOCIS)
Researcher (PI) Thierry Poinsot
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary "INTECOCIS is a project on energy production by combustion built by IMFT (experiments, theory and instabilities) and CERFACS (numerical simulation). Combustion produces 90 percent of the earth energy and will remain our first energy source for a long time. Optimizing combustors is a key issue to burn fossil and renewable fuels more efficiently but also to replace wind or solar energy production on days without sun or wind. This optimization cannot take place without numerical simulation (‘virtual combustors’) that allows to test designs without building them. These virtual combustors cannot account for combustion instabilities (CI) which are a major risk in combustors where they induce vibration, loss of control and destruction. CIs cannot be predicted reliably today. INTECOCIS aims at introducing recent progress in High Performance Computing (HPC) into studies of CIs, to build simulation tools running on massively parallel computers that can predict CIs in future combustors and assess methods to control them. To achieve this goal, the simulations used today for CIs will be revolutionized to integrate recent HPC capacities and have the capabilities and brute power required to compute and control CI phenomena. A second objective of INTECOCIS is to distribute these HPC-based tools in Europe. These tools will integrate UQ (uncertainty quantification) methodologies to quantify the uncertainties associated with the simulations because CIs are sensitive to small changes in geometry, fuel composition or boundary conditions. Moreover, simulation tools also contain uncertain parameters (numerical methods, space and time discretization, impedances, physical sub models) that will have to be investigated as well. Most of the work will be theoretical and numerical but INTECOCIS will also include validation on laboratory burners (at IMFT and other laboratories in Europe) as well as applications on real combustors for European companies collaborating with IMFT and CERFACS."
Max ERC Funding
2 488 656 €
Duration
Start date: 2013-02-01, End date: 2018-01-31
Project acronym INTEG-CV-SIM
Project An Integrated Computer Modelling Framework for Subject-Specific Cardiovascular Simulation: Applications to Disease Research, Treatment Planning, and Medical Device Design
Researcher (PI) Carlos Alberto Figueroa Alvarez
Host Institution (HI) KING'S COLLEGE LONDON
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Advances in numerical methods and three-dimensional imaging techniques have enabled the quantification of cardiovascular mechanics in subject-specific anatomic and physiologic models. Research efforts have been focused mainly on three areas: pathogenesis of vascular disease, development of medical devices, and virtual surgical planning. However, despite great initial promise, the actual use of patient-specific computer modelling in the clinic has been very limited. Clinical diagnosis still relies entirely on traditional methods based on imaging and invasive measurements and sampling. The same invasive trial-and-error paradigm is often seen in vascular disease research, where animal models are used profusely to quantify simple metrics that could perhaps be evaluated via non-invasive computer modelling techniques. Lastly, medical device manufacturers rely mostly on in-vitro models to investigate the anatomic variations, arterial deformations, and biomechanical forces needed for the design of stents and stent-grafts. In this project, I aim to develop an integrated image-based computer modelling framework for subject-specific cardiovascular simulation with dynamically adapting boundary conditions capable of representing alterations in the physiologic state of the patient. This computer framework will be directly applied in clinical settings to complement and enhance current diagnostic practices, working towards the goal of personalized cardiovascular medicine.
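To illustrate what an adapting outflow boundary condition can look like in such simulations, the sketch below integrates a two-element Windkessel model, a lumped-parameter condition commonly coupled to cardiovascular solvers; this is an assumption about the general approach, with arbitrary parameter values, not this project's specific formulation.

    import numpy as np

    # Two-element Windkessel outflow boundary condition: C*dP/dt = Q(t) - P/R.
    R, C = 1.0, 1.5          # resistance and compliance (arbitrary consistent units)
    dt, T = 1e-3, 5.0        # time step and total time, s
    t = np.arange(0.0, T, dt)
    Q = np.maximum(np.sin(2 * np.pi * t), 0.0)   # crude pulsatile inflow waveform

    P = np.zeros_like(t)
    for k in range(len(t) - 1):
        dPdt = (Q[k] - P[k] / R) / C
        P[k + 1] = P[k] + dt * dPdt              # forward-Euler update
    print(f"pressure range: {P.min():.2f} to {P.max():.2f}")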
Max ERC Funding
1 491 593 €
Duration
Start date: 2012-12-01, End date: 2018-11-30
Project acronym INVARIANTCLASS
Project Invariant Representations for High-Dimensional Signal Classifications
Researcher (PI) Stéphane Mallat
Host Institution (HI) ECOLE NORMALE SUPERIEURE
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary Considerable amounts of high-dimensional signals are continuously being acquired, whether audio, images, videos, or specialized signals, for example in geophysics or medicine. Automatic classification and retrieval is strongly needed to analyze and access these massive data sets, but current algorithms often produce too many errors. For high-dimensional signals, supervised classification algorithms are typically applied to reduced "feature vectors". These feature representations are specialized for each signal modality, for example speech, music, images, videos or seismic signals. This proposal aims at unifying these approaches to improve classification performance, by developing a general mathematical and algorithmic framework to optimize representations for classification. Classification errors result from representations which are not sufficiently informative or which maintain too much variability. The central challenge is to understand how to construct stable, informative invariants, while facing progressively more complex sources of variability. The first task concentrates on invariants to the action of groups including translations, rotations and scalings, while preserving stability to deformations. The second task addresses unsupervised representation learning from training data. The third task explores stable representations of invariant geometric signal structures, which is an outstanding problem. These challenges involve building new mathematical tools in harmonic and wavelet analysis, geometry and statistics, in close interaction with numerical algorithms. Classification applications to audio, images, video signals or geophysical signals are expected to serve as a basis for groundbreaking technological advances.
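As a minimal illustration of a translation invariant, the textbook example is the modulus of the discrete Fourier transform, which is unchanged by circular shifts of the signal; this is shown only for orientation and is not the wavelet-based representations the project builds on.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)
    x_shifted = np.roll(x, 7)                    # translated copy of the same signal

    rep = lambda s: np.abs(np.fft.fft(s))        # translation-invariant representation
    print(np.allclose(rep(x), rep(x_shifted)))   # True: the representation is invariant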
Max ERC Funding
2 316 000 €
Duration
Start date: 2013-03-01, End date: 2019-02-28
Project acronym IP4EC
Project Image processing for enhanced cinematography
Researcher (PI) Marcelo Bertalmío Barate
Host Institution (HI) UNIVERSIDAD POMPEU FABRA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary The objective is to develop image processing algorithms for cinema that allow people watching a movie on a screen to see the same details and colors as people at the shooting location can. It is due to camera and display limitations that the shooting location and the images on the screen are perceived very differently.
We want to be able to use common cameras and displays (as opposed to highly expensive hardware systems) and to work solely on processing the video so that our perception of the scene and of the images on the screen match, without having to add artificial lights when shooting or to manually correct the colors to adapt to a particular display device.
Given that, in terms of sensing capabilities, cameras are in most regards better than human photoreceptors, the superiority of human vision over camera systems lies in the better processing which is carried out in the retina and visual cortex. Therefore, rather than working on the hardware, improving lenses and sensors, we will instead use, whenever possible, existing knowledge in visual neuroscience and models of visual perception to develop software methods mimicking neural processes in the human visual system, and apply these methods to images captured with a regular camera.
From a technological standpoint, reaching our goal will be a remarkable achievement which will impact how movies are made (in less time, with less equipment, with smaller crews, with more artistic freedom) but also which movies are made (since good-visual-quality productions will become more affordable). We also anticipate a considerable technological impact in the realm of consumer video.
From a scientific standpoint, this will imply finding solutions for several challenging open problems in image processing and computer vision, but it also has a strong potential to bring methodological advances to other domains like experimental psychology and visual neuroscience.
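A minimal sketch of the kind of neurally inspired processing involved, here a generic divisive-normalisation / local-contrast operation assumed purely for illustration rather than taken from the project.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_contrast(image, sigma=5.0, eps=1e-3):
        """Crude retina-like operation: divide each pixel by a blurred local mean,
        so the output encodes local contrast rather than absolute intensity."""
        local_mean = gaussian_filter(image, sigma)
        return image / (local_mean + eps)

    img = np.random.default_rng(0).random((128, 128))   # stand-in for a camera frame
    out = local_contrast(img)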
Max ERC Funding
1 499 160 €
Duration
Start date: 2012-10-01, End date: 2018-03-31
Project acronym L3VISU
Project Life Long Learning for Visual Scene Understanding (L3ViSU)
Researcher (PI) Christoph Lampert
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGYAUSTRIA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "My goal in the project is to develop and analyze algorithms that use continuous, open-ended machine learning from visual input data (images and videos) in order to interpret visual scenes on a level comparable to humans.
L3ViSU is based on the hypothesis that we can only significantly improve the state of the art in computer vision algorithms by giving them access to background and contextual knowledge about the visual world, and that the most feasible way to obtain such knowledge is by extracting it (semi-) automatically from incoming visual stimuli. Consequently, at the core of L3ViSU lies the idea of life-long visual learning.
Sufficient data for such an effort is readily available, e.g. through digital TV channels and media-sharing Internet platforms, but the question of how to use these resources to build better computer vision systems is wide open. In L3ViSU we will rely on modern machine learning concepts, representing task-independent prior knowledge as prior distributions and function regularizers. This functional form allows them to help solve specific tasks by guiding the solution towards "reasonable" ones, and to suppress mistakes that violate "common sense". The result will be not only improved prediction quality, but also a reduction in the amount of manual supervision necessary, and the possibility to introduce more semantics into computer vision, which has recently been identified as one of the major tasks for the next decade.
L3ViSU is a project on the interface between computer vision and machine learning. Solving it requires expertise in both areas, as represented in my research group at IST Austria. The life-long learning concepts developed within L3ViSU, however, will have impact outside of both areas, be it as the basis of life-long learning systems with a different focus, such as in bioinformatics, or as a foundation for projects of commercial value, such as more intelligent driver assistance or video surveillance systems."
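A minimal sketch of "prior knowledge as a regularizer" in the sense described above, using generic ridge-regularised least squares; the penalty term plays the role of the prior, and all names and values here are illustrative.

    import numpy as np

    def fit_with_prior(X, y, lam=1.0):
        """Least squares with an L2 penalty: the penalty acts as task-independent
        prior knowledge that discourages 'unreasonable' parameter vectors."""
        n_features = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 10))
    y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)
    w = fit_with_prior(X, y, lam=0.5)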
Max ERC Funding
1 464 712 €
Duration
Start date: 2013-01-01, End date: 2018-06-30
Project acronym LEAD
Project Lower Extremity Amputee Dynamics: Simulating the Motion of an Above-Knee Amputee’s Stump by Means of a Novel EMG-Integrated 3D Musculoskeletal Forward-Dynamics Modelling Approach
Researcher (PI) Oliver Röhrle
Host Institution (HI) UNIVERSITAET STUTTGART
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "Wearing sub-optimally fitted lower limb prosthesis cause disorders of the stump that strongly lessens the well-being and the performance of an amputee. As experimental measurements are currently not capable of providing enough insights in the dynamic behaviour of the stump, simulations need to be employed to achieve the necessary knowledge gain to significantly improve the socket design and, hence, to increase the amputee’s well-being and performance. The overall goal of this proposal is to provide the enabling technology in form of novel computational and experimental methodologies to assist the design process of next-generation prosthetic devices. The focus hereby is to gain a better understanding of the dynamics of the musculoskeletal system of a lower extremity amputee, here, the stump of an above-knee amputee. To achieve this, LEAD pursues two aims. The first and main aim focuses on substantially changing existing modelling philosophies and methodologies of forward dynamics approaches such that they are capable of representing muscles, bone, and skin as 3D continuum-mechanical objects. To counteract the increase of computational cost by switching from 1D lumped-parameter models to 3D models, novel, elegant, and efficient algorithms, e.g. nested iteration techniques tuned for efficiency through model-based coupling strategies and optimised solvers, need to be developed. The second aim is to experimentally measure physical quantities that provide the necessary input to drive the forward dynamics model, e.g. EMG, and to provide means of validation, e.g. with respect to pressure measurements, ultrasound recordings, and motion capture. Given the non-existing field of forward dynamics appealing to continuum-mechanical skeletal muscle models, LEAD creates a new field of research."
Max ERC Funding
1 676 760 €
Duration
Start date: 2012-11-01, End date: 2017-10-31
Project acronym LEBMEC
Project Laser-engineered Biomimetic Matrices with Embedded Cells
Researcher (PI) Aleksandr Ovsianikov
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Traditional 2D cell culture systems used in biology do not accurately reproduce the 3D structure, function, or physiology of living tissue. Resulting behaviour and responses of cells are substantially different from those observed within natural extracellular matrices (ECM). The early designs of 3D cell-culture matrices focused on their bulk properties, while disregarding individual cell environment. However, recent findings indicate that the role of the ECM extends beyond a simple structural support to regulation of cell and tissue function. So far the mechanisms of this regulation are not fully understood, due to technical limitations of available research tools, diversity of tissues and complexity of cell-matrix interactions.
The main goal of this project is to develop a versatile and straightforward method enabling systematic studies of cell-matrix interactions. 3D CAD matrices will be produced by femtosecond laser-induced polymerization of hydrogels with cells embedded in them. Cell embedment results in tissue-like, intimate cell-matrix contact and appropriate cell densities right from the start.
A unique advantage of LeBMEC is its capability to alter on demand a multitude of individual properties of the produced 3D matrices, including geometry, stiffness and cell-adhesion properties. It allows us to systematically reconstruct and identify the key biomimetic properties of the ECM in vitro. The particular focus of this project is on the role of the local mechanical properties of the produced hydrogel constructs. It is known that stem cells on soft 2D substrates differentiate into neurons, stiffer substrates induce bone cells, and intermediate ones result in myoblasts. With LeBMEC, a controlled distribution of site-specific stiffness within the same hydrogel matrix can be achieved in 3D. In this way, the rational design of cell-culture matrices that initially embed only stem cells becomes possible for the first time, enabling the realisation of precisely defined 3D multi-tissue constructs.
Max ERC Funding
1 440 594 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym MAtrix
Project In silico and in vitro Models of Angiogenesis: unravelling the role of the extracellular matrix
Researcher (PI) Hans Pol S Van Oosterwyck
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Angiogenesis, the formation of new blood vessels from the existing vasculature, is a process that is fundamental to normal tissue growth, wound repair and disease. The control of angiogenesis is of utmost importance for tissue regenerative therapies as well as cancer treatment; however, this remains a challenge. The extracellular matrix (ECM) is one of the key controlling factors of angiogenesis. The mechanisms through which the ECM exerts its influence are poorly understood. MAtrix will create unprecedented opportunities for unravelling the role of the ECM in angiogenesis. It will do so by creating a highly innovative, multiscale in silico model that provides quantitative, subcellular resolution of cell-matrix interaction, which is key to the understanding of cell migration. In this way, MAtrix goes substantially beyond the state of the art in terms of computational models of angiogenesis. It will integrate mechanisms of ECM-mediated cell migration and relate them to intracellular regulatory mechanisms of angiogenesis.
Apart from its innovation in terms of computational modelling, MAtrix’ impact is related to its interdisciplinarity, involving computer simulations and in vitro experiments. This will make it possible to investigate research hypotheses on the role of the ECM in angiogenesis that are generated by the in silico model. State-of-the-art technologies (fluorescence microscopy, cell and ECM mechanics, biomaterials design) will be applied – in conjunction with the in silico model – to quantify cell-ECM mechanical interaction at a subcellular level and the dynamics of cell migration. In vitro experiments will be performed for a broad range of biomaterials and their characteristics. In this way, MAtrix will deliver a proof of concept that an in silico model can help in identifying and prioritising biomaterials characteristics relevant for angiogenesis. MAtrix’ findings can have a major impact on the development of therapies that aim to control the angiogenic response.
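A minimal sketch of the kind of cell-level rule that such in silico migration models build on, here a hypothetical toy random walk on a grid biased towards higher values of an ECM "guidance" field; it is purely illustrative and not the project's multiscale model.

    import numpy as np

    # Toy tip-cell migration: a random walk biased towards larger values of an
    # ECM field (e.g. stiffness or bound growth factor concentration).
    rng = np.random.default_rng(0)
    field = rng.random((50, 50)) + np.indices((50, 50))[1] / 50.0   # gradient to the right

    pos = np.array([25, 0])
    path = [tuple(pos)]
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for _ in range(200):
        scores = []
        for dy, dx in moves:
            y, x = np.clip(pos + (dy, dx), 0, 49)
            scores.append(field[y, x])
        probs = np.exp(5 * np.array(scores))
        probs /= probs.sum()                      # softmax over candidate moves
        dy, dx = moves[rng.choice(4, p=probs)]
        pos = np.clip(pos + (dy, dx), 0, 49)
        path.append(tuple(pos))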
Max ERC Funding
1 497 400 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym MEMSforLife
Project Microfluidic systems for the study of living roundworms (Caenorhabditis elegans) and tissues
Researcher (PI) Martinus Adela Maria Gijs
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary This proposal, situated at the interface of the microengineering, biological and medical fields, aims to develop microfluidic chips for studying living roundworms (Caenorhabditis elegans), living cultured liver tissue slices obtained from mice, and formaldehyde/paraffin-fixed human breast cancer tissue slices and tumors. Each type of microfluidic chip will be the central component of a computer-controlled platform with syringe pumps for accurate dosing of reagents, allowing microscopic observation or other types of detection. From an application point of view the work is focused on five objectives: (i) Development of high-throughput worm chips. Our goal is to build worm tools that enable high-throughput lifespan and behavioral measurements at single-animal resolution with statistical relevance. (ii) Linking on-chip microparticles (beads) to the C. elegans cuticle. We will use beads with electrostatic surface charges and beads with a magnetic core to quantify locomotion and the forces developed by the worms. Moreover, high-refractive-index microspheres will be used as in situ microlenses for optical nanoscopic worm imaging. (iii) Realization of a nanocalorimetric chip-based setup to determine the minute amount of heat produced by worms and to compare the metabolic activity of wild-type worms and mutants. (iv) Study of precision-cut ex vivo liver tissue slices from mice, in particular to evaluate glucose synthesis. The slices will be continuously perifused with nutrients and oxygen, and glucose detection will be based on the electrochemical principle using microfabricated electrodes. (v) On-chip immunohistochemical processing and fluorescent imaging of fixed clinical tissue slices and tumorectomy samples. These systems aim at the multiplexed detection of biomarkers on cancerous tissues for fast and accurate clinical diagnosis.
Max ERC Funding
2 492 400 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym MLCS
Project Machine learning for computational science: statistical and formal modelling of biological systems
Researcher (PI) Guido Sanguinetti
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Computational modelling is changing the face of science. Many complex systems can be understood as embodied computational systems performing distributed computations on a massive scale. Biology is the discipline where these ideas find their most natural application: cells can be viewed as input/output devices, with proteins and organelles behaving as finite state machines performing distributed computations inside the cell. This led to the influential "cells as computation" framework, and the successful deployment of formal verification and analysis on models of biological systems.
This paradigm shift in our understanding of biology has been possible due to the increasingly quantitative experimental techniques being developed in experimental biology. Formal modelling techniques, however, do not have mechanisms to directly include the information obtained from experimental observations in a statistically consistent way. This difficulty in relating the experimental and theoretical developments in biology is a central problem: without incorporating observations, it is extremely difficult to obtain reliable parametrisations of models. More importantly, it is impossible to assess the confidence of model predictions. This means that the central scientific task of falsifying hypotheses cannot be performed in a statistically meaningful way, and that it is very difficult to employ model predictions to rationally plan novel experiments.
In this project we will build and develop machine learning tools for continuous time stochastic processes to obtain a principled treatment of the uncertainty at every step of the modelling pipeline. We will use and extend probabilistic programming languages to fully automate the inference tasks, and link to advanced modelling languages to allow formal analysis tools to be deployed in a data modelling framework. We will pursue two applications to fundamental problems in systems biology, guaranteeing impact on exciting scientific questions.
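As a toy illustration of the kind of continuous-time stochastic process targeted here (not the project's own machinery), the sketch below simulates a birth-death process with the standard Gillespie algorithm; the rate constants are arbitrary.

import math
import random

def gillespie_birth_death(birth=1.0, death=0.1, x0=10, t_max=50.0):
    """Exact stochastic simulation of a birth-death process:
    X -> X+1 at rate `birth`, X -> X-1 at rate `death` * X."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_max:
        a_birth, a_death = birth, death * x
        a_total = a_birth + a_death
        if a_total == 0:
            break
        t += -math.log(random.random()) / a_total   # exponential waiting time
        x += 1 if random.random() < a_birth / a_total else -1
        trajectory.append((t, x))
    return trajectory

print(gillespie_birth_death()[-1])

Inferring the rates (birth, death) from noisy observations of such trajectories, with honest uncertainty estimates, is exactly the class of task the proposed tools are meant to automate.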
Max ERC Funding
1 421 944 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym MORPHOSIS
Project Morphing Locally and Globally Structures with Multiscale Intelligence by Mimicking Nature
Researcher (PI) Giulia Lanzara
Host Institution (HI) UNIVERSITA DEGLI STUDI ROMA TRE
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary The objective of the proposed research is to engineer novel multifunctional morphing materials drawing inspiration from biological systems that are known to possess distributed sensing capabilities which in turn guide their local and global morphing. This will be achieved through the development of novel multi-scale (nano- to macro-scale) technologies and materials that, once integrated, will allow distributed local/global sensing and morphing capabilities that can be exploited for structural as well as for eminently flexible applications. The distributed local/global morphing and sensing will be delivered by fabricating at the microscale a non-invasive, light-weight, flexible and highly expandable active network with enhanced actuation capabilities and a neurological sensor network. The networks are then expanded to the macro-scale prior to being integrated in a flexible material or in an innovative multi-stable shape-memory carbon-fiber composite. The sensor network monitors environmental and loading conditions. These data are then used to control the deformation of the active network, which can deliver local morphing (roughness changes, as in dolphin skin, for instance for drag reduction) or global morphing (e.g. deformable textiles, as in insect wings) in flexible materials. The multi-stable carbon-fiber composite can be used in conjunction with these two functions so as to achieve advanced morphing in structural applications (e.g. birds' wings vs. aircraft wings). The composite, with a shape-memory resin as hosting matrix, can snap from one configuration to the other owing to its rigidity and its sensitivity to temperature variations. The speed of the purposely introduced snap-through process will be tuned with the help of the integrated active network. This research has the potential to pave the way toward the development of new multidisciplinary research fields and could revolutionise the design and production of future structures in a variety of fields.
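A generic way to picture the multi-stability exploited here (a textbook abstraction, not the composite's actual constitutive law) is the classical double-well potential, whose two minima correspond to the two stable configurations between which the laminate can snap:

\[
U(x) = -\tfrac{a}{2}\,x^{2} + \tfrac{b}{4}\,x^{4}, \qquad a, b > 0,
\]

with stable equilibria at \(x = \pm\sqrt{a/b}\) and an energy barrier \(\Delta U = a^{2}/(4b)\) that the snap-through actuation must supply.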
Max ERC Funding
1 664 600 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym MQC
Project Methods for Quantum Computing
Researcher (PI) Andris Ambainis
Host Institution (HI) LATVIJAS UNIVERSITATE
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "Quantum information science (QIS) is a young research area at the frontier of both computer science and physics. It studies what happens when we apply the principles of quantum mechanics to problems in computer science and information processing. This has resulted in many unexpected discoveries and opened up new frontiers.
Quantum algorithms (such as Shor’s factoring algorithm) can solve computational problems that are intractable for conventional computers. Quantum mechanics also enables quantum cryptography which provides an ultimate degree of security that cannot be achieved by conventional methods. These developments have generated an enormous interest both in building a quantum computer and exploring the mathematical foundations of quantum information.
We will study computer science aspects of QIS. Our first goal is to develop new quantum algorithms and, more generally, new algorithmic techniques for developing quantum algorithms. We will explore a variety of new ideas: quantum walks, span programs, learning graphs, linear equation solving, computing by transforming quantum states.
Secondly, we will study the limits of quantum computing. We will look at various classes of computational problems and analyze the biggest speedups that quantum algorithms can achieve. We will also work on identifying computational problems which are hard even for a quantum computer. Such problems can serve as a basis for cryptography that would be secure against quantum computers.
Thirdly, the ideas from quantum information can lead to very surprising connections between different fields. The mathematical methods from quantum information can be applied to solve purely classical (non-quantum) problems in computer science. The ideas from computer science can be used to study the complexity of physical systems in quantum mechanics. We think that both of those directions have the potential for unexpected breakthroughs and we will pursue both of them."
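For readers unfamiliar with quantum speedups, the following small sketch (purely illustrative, not part of the proposal) classically simulates Grover amplitude amplification on N = 2^n items and shows that roughly (pi/4)*sqrt(N) iterations concentrate the amplitude on the marked item, versus ~N/2 classical queries.

import math
import numpy as np

def grover_success_probability(n_qubits=8, marked=3):
    """Classically simulate Grover's algorithm on N = 2**n_qubits items."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / math.sqrt(N))        # uniform superposition
    iterations = int(round(math.pi / 4 * math.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                     # oracle: flip marked amplitude
        state = 2 * state.mean() - state        # diffusion: reflect about the mean
    return state[marked] ** 2                   # probability of measuring `marked`

print(grover_success_probability())             # close to 1.0 after ~13 iterations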
Max ERC Funding
1 360 980 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym Multiturbulence
Project Fractal-generated fluid flows: new flow concepts, technological innovation and fundamentals
Researcher (PI) John Christos Vassilicos
Host Institution (HI) IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary The unprecedented requirements set by the dramatically evolving energy, environmental and climatic constraints mean that industry needs new turbulent and vortical flow concepts for new flow-technology solutions. The full potential and economic impact of radically new industrial flow concepts can only be realised with a step change (i) in the ways we generate and condition turbulent and vortical flows and (ii) in our understanding of turbulent flow dynamics and the consequent new predictive approaches.
Fractal/multiscale-generated turbulent and vortical flows are a new family of flow concepts which I have recently pioneered and which hold the following double promise:
(1) as the basis for a raft of conceptually new technological flow solutions which can widely set entirely new industrial standards: this proposal focuses on energy-efficient yet effective mixing devices; low-power highly-enhanced heat exchangers; high-performance wings for UAVs, cars, wind turbines; and realistic wind-field design technologies for wind tunnel tests of tall structures such as supertall skyscrapers and wind turbines.
(2) as the key new family of turbulent flows which will allow hitherto impossible breakthroughs in our theory and modelling of fluid turbulence.
I propose to realise this double promise by a combined experimental-computational approach which will use cutting edge High Performance Computing (HPC) and high-fidelity simulations based on a new code which combines academic accuracy with industrial versatility and which is specifically designed to perform very efficient massively parallel computations on HPC systems. I will run these simulations in tandem with a complementary wide range of wind tunnel, water channel and other laboratory measurements in a two-way interaction between laboratory and computer experiments which will ensure validations and breadth of results.
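As a simple illustration of what "fractal/multiscale-generated" means in this context (a sketch only; the actual grid geometries are a design output of the project), the code below lists the bar lengths and thicknesses of an idealised fractal square grid in which each generation scales lengths and thicknesses by fixed ratios; all parameter values are assumptions.

def fractal_square_grid(L0=0.5, t0=0.02, RL=0.5, Rt=0.4, n_gen=4):
    """Return (length, thickness, count) per generation of an idealised
    fractal square grid: 4 new squares per square each generation,
    side lengths scaled by RL and bar thicknesses by Rt."""
    bars = []
    for j in range(n_gen):
        length = L0 * RL ** j
        thickness = t0 * Rt ** j
        count = 4 ** j            # number of squares at generation j
        bars.append((length, thickness, count))
    return bars

for length, thickness, count in fractal_square_grid():
    print(f"{count:3d} squares, side {length:.3f} m, bar thickness {thickness:.4f} m")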
Max ERC Funding
2 317 265 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym MUSIC
Project Modeling and Simulation of Cancer Growth
Researcher (PI) Hector Gómez Díaz
Host Institution (HI) UNIVERSIDADE DA CORUNA
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Nowadays, the treatment of cancer is based on the so-called diagnostic paradigm. We feel that the shift from the traditional diagnostic paradigm to a predictive, patient-specific one may lead to more effective therapies. Thus, the objective of this project is to introduce predictive models for cancer growth. These predictive models will take the form of mathematical models developed from first principles and the fundamental features of cancer biology. For these models to be useful in clinical practice, we will need to introduce new numerical algorithms that permit fast and accurate simulations based on patient-specific data.
We propose to develop mathematical models using the framework provided by mixture theory and the phase-field method. Our model will account for the growth of the tumor and the vasculature that develops around it, which is essential for the tumor to grow beyond a harmless limited size. We propose to develop new algorithms based on Isogeometric Analysis, a recent generalization of Finite Elements with several advantages. The use of Isogeometric Analysis will simplify the interface between medical images and the computational mesh, permitting the generation of the smooth basis functions necessary to approximate higher-order partial differential equations like those that govern cancer growth. Our modeling and simulation tools will be examined and validated against experimental and clinical observations. To accomplish this, we propose to use anonymized patient-specific data from several imaging modalities.
Arguably, the successful undertaking of this project would have the potential to transform classical population/statistics-based treatments of cancer into patient-specific therapies. This would elevate mathematical modeling and simulation of cancer growth to a stage at which it can be used as a quantitatively accurate predictive tool with implications for clinical practice, clinical trial design and outcome prediction.
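To give a flavour of the higher-order PDEs mentioned above (a generic textbook form, not the project's specific model), a phase-field description of a growing tumour typically evolves an order parameter \(\phi\) by a Cahn-Hilliard-type equation with proliferation and apoptosis source terms:

\[
\frac{\partial \phi}{\partial t} = \nabla \cdot \left( M \nabla \mu \right) + \lambda_p\,\phi\,\sigma - \lambda_a\,\phi,
\qquad
\mu = f'(\phi) - \epsilon^{2}\,\nabla^{2}\phi,
\]

where \(\sigma\) is a nutrient concentration and \(f\) a double-well potential; the fourth-order spatial operator hidden in \(\nabla\cdot(M\nabla\mu)\) is what motivates the smooth, higher-continuity basis functions provided by Isogeometric Analysis.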
Max ERC Funding
1 405 420 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym NANO-JETS
Project Next-generation polymer nanofibers: from electrified jets to hybrid optoelectronics
Researcher (PI) Dario Pisignano
Host Institution (HI) UNIVERSITA DEL SALENTO
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "This project ultimately targets the application of polymer nanofibers in new, cavity-free lasers. To this aim, it wants to tackle the still unsolved problems of the process of electrospinning in terms of product control by the parameters affecting the dynamics of electrified jets. The electrospinning is based on the uniaxial elongation of polymeric jets with sufficient molecular entanglements, in presence of an intense electric field. It is a unique approach to produce nanofibers with high throughput. However, the process is still largely suboptimal, the most of nanofiber production being still carried out on an empirical basis. Though operationally simple, electrospinning is indeed complex as the behavior of electrified jets depends on many experimental variables making fully predictive approaches still missing. This project aims to elucidating and engineering the still unclear working principles of electrospinning by solutions incorporating active materials, with a tight synergy among modeling, fast-imaging characterization of electrified jets, and process engineering. Once optimized, nanofibers will offer an effective, well-controllable and cheap material for building new, cavity-free random laser systems. These architectures will enable enhanced miniaturization and portability, and enormously reduced realization costs. Electrospun nanofibers will offer a unique combination of optical properties, tuneable topography and light scattering effectiveness, thus being an exceptional bench tool to realize such new low-cost lasers, which is the second project goal. The accomplishment of these ambitious but well-defined objectives will have a groundbreaking, interdisciplinary impact, from materials science to physics of fluid jets in strong elongational conditions, from process to device engineering. The project will set-up a new, internationally-leading laboratory on polymer processing, making a decisive contribution to the establishment of scientific independence."
Max ERC Funding
1 491 823 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym NanoTrigger
Project Triggerable nanomaterials to modulate cell activity
Researcher (PI) Lino Da Silva Ferreira
Host Institution (HI) CENTRO DE NEUROCIENCIAS E BIOLOGIACELULAR ASSOCIACAO
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary The advent of molecular reprogramming and the associated opportunities for personalised and therapeutic medicine require the development of novel systems for the on-demand delivery of reprogramming factors into cells in order to modulate their activity/identity. Such triggerable systems should allow precise control of the timing, duration, magnitude and spatial release of the reprogramming factors. Furthermore, the system should allow this control even in vivo, using non-invasive means. The present project aims at developing triggerable systems able to release reprogramming factors efficiently on demand. The potential of this technology will be tested in two settings: (i) the reprogramming of somatic cells in vitro, and (ii) the improvement of hematopoietic stem cell engraftment in vivo, at the bone marrow. The proposed research involves a team of engineers, chemists and biologists and is highly multidisciplinary in nature, encompassing elements of engineering, chemistry, systems biology, stem cell technology and nanomedicine.
Max ERC Funding
1 699 320 €
Duration
Start date: 2012-11-01, End date: 2017-10-31
Project acronym NEMESIS
Project Novel Energy Materials: Engineering Science and Integrated Systems (NEMESIS)
Researcher (PI) Christopher Rhys Bowen
Host Institution (HI) UNIVERSITY OF BATH
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary The aim of NEMESIS is to establish a world-leading research center in ferroelectric and piezoelectric materials for energy harvesting and energy generation. I will deliver cutting-edge multi-disciplinary research encompassing materials, physics, chemistry and electrical engineering and develop groundbreaking materials and structures for energy creation. The internationally leading research center will be dedicated to developing new and innovative solutions for generating and harvesting energy using novel materials from the macro- to the nano-scale.
Key challenges and novel technical approaches are:
1. To create energy harvesting nano-generators to convert vibrations into electrical energy in hostile environments (e.g. wireless sensors in near engine applications).
2. To enable broadband energy harvesting to generate electrical energy from ambient vibrations which generally exhibit multiple time-dependent frequencies.
3. To produce Curie-temperature tuned nano-structured pyroelectrics to optimise the electrical energy scavenged from temperature fluctuations. To further enhance the energy generation I aim to couple thermal expansion and pyroelectric effects to produce a new class of thermal energy harvesting materials and systems.
4. To create nano-structured ferroelectric and piezoelectric materials for novel water-splitting applications. Two approaches will be considered: the use of the internal electric fields present in ferroelectrics to prevent recombination of photo-excited electron-hole pairs, and the use of the electric charge generated on mechanically stressed piezoelectric nano-rods to convert water into hydrogen and oxygen.
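For context on the pyroelectric harvesting route in point 3 above (a textbook relation rather than a project result), the current drawn from a pyroelectric element of electrode area \(A\) under a temperature fluctuation is

\[
i_p(t) = p \, A \, \frac{dT}{dt},
\]

where \(p\) is the pyroelectric coefficient; tuning the Curie temperature, and hence the magnitude of \(p\) near the operating point, therefore directly controls the energy that can be scavenged.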
Max ERC Funding
2 266 020 €
Duration
Start date: 2013-02-01, End date: 2018-12-31
Project acronym NetSat
Project Networked Pico-Satellite Distributed System Control
Researcher (PI) Klaus Schilling
Host Institution (HI) Zentrum fuer Telematik e.V.
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary A paradigm shift is emerging in spacecraft engineering from single, large, multifunctional satellites towards cooperating groups of small satellites. This will enable innovative applications in areas like Earth observation or telecommunication. The related interdisciplinary research in the fields of formation control and networked satellites constitutes the key challenge of this proposal.
Modern miniaturization techniques allow the realization of satellites of ever smaller mass, thus enabling the cost-efficient implementation of distributed multi-satellite systems. In preparation, my team has already realized satellites of only 1 kg mass in the University of Würzburg’s Experimental satellite (UWE) program, emphasizing crucial components for formation flying, such as communication (UWE-1, launched 2005), attitude determination (UWE-2, launched 2009), and attitude control (UWE-3, launched 2013).
My vision for the proposed project is to demonstrate formation control of four pico-satellites in orbit for the first time worldwide. To realize this objective, innovative multi-satellite networked orbit control based on the relative position and attitude of each satellite is to be implemented in order to enable Earth observations based on multipoint measurements. Related sensor systems used in my laboratory for the advanced characterization of teams of mobile robots will be transferred to the space environment. Breakthroughs are expected by combining optimal control strategies for the coordination of relative motion with a robust flow of information in the network of satellites and ground stations, implemented via an innovative use of ad-hoc networks in space. Based on my team’s expertise in implementing very small satellites, a system composed of four satellites will be launched for the first time to demonstrate autonomous distributed formation control in orbit. This in-orbit evaluation is expected to open up significant application potential for future distributed satellite system services in Earth observation.
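A standard starting point for relative-motion control of this kind (quoted for orientation, not necessarily the project's chosen formulation) are the Clohessy-Wiltshire equations, which linearise the dynamics of a deputy satellite relative to a chief on a circular orbit with mean motion \(n\):

\[
\ddot{x} - 2n\dot{y} - 3n^{2}x = u_x,\qquad
\ddot{y} + 2n\dot{x} = u_y,\qquad
\ddot{z} + n^{2}z = u_z,
\]

where \(x\), \(y\), \(z\) are the radial, along-track and cross-track offsets and \((u_x,u_y,u_z)\) are the control accelerations to be coordinated across the formation.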
Max ERC Funding
2 500 000 €
Duration
Start date: 2014-08-01, End date: 2019-07-31
Project acronym NGHCS
Project NGHCS: Creating the Next-Generation Mobile Human-Centered Systems
Researcher (PI) Vasiliki (Vana) Kalogeraki
Host Institution (HI) ATHENS UNIVERSITY OF ECONOMICS AND BUSINESS - RESEARCH CENTER
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Advances in sensor networking and the availability of everyday, low-cost sensor-enabled devices have led to the integration of sensors to instrument the physical world in a variety of economically vital sectors: agriculture, transportation, healthcare, critical infrastructures and emergency response. At the same time, social computing is undergoing a major revolution: social networks, as exemplified by Twitter or Facebook, have significantly changed the way humans interact with one another. We are now entering a new era in which people and systems are becoming increasingly integrated, and this development is effectively leading us to large-scale mobile human-centered systems. Our goal is to develop a comprehensive framework to simplify the development of mobile human-centered systems, as well as to make them predictable and reliable. Our work has the following research thrusts. First, we will develop techniques for dealing efficiently with the dynamic, unpredictable factors that such complex systems face, including dynamic workloads, the unpredictable occurrence of events, the real-time demands of applications, as well as user changes and urban dynamics. To achieve this, we will investigate the use of mathematical models to control the behavior of the applications in the absence of perfect system models and a priori information on load and human usage patterns. Second, we will develop the foundations needed to meet the end-to-end timeliness and reliability demands of the range of distributed systems that we will consider, by developing novel techniques at different layers of the distributed environment and studying the trade-offs involved. Third, we will develop general techniques to push computation and data storage as much as possible to the mobile devices, and to integrate participatory sensing and crowdsourcing techniques. The outcome of the proposed work is expected to have significant impact on a wide variety of distributed systems application domains.
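A minimal sketch of model-light feedback control of the kind alluded to above (an invented example, not the project's controller): a loop that adapts an application's sensing rate so the observed deadline-miss ratio tracks a target value.

def adapt_rate(rate_hz, miss_ratio, target=0.05, gain=0.5,
               min_rate=1.0, max_rate=100.0):
    """Simple proportional controller: lower the sensing rate when too
    many deadlines are missed, raise it when there is slack."""
    error = target - miss_ratio
    new_rate = rate_hz * (1.0 + gain * error)
    return max(min_rate, min(max_rate, new_rate))

rate = 50.0
for observed_miss_ratio in [0.20, 0.12, 0.06, 0.04, 0.05]:
    rate = adapt_rate(rate, observed_miss_ratio)
    print(f"miss={observed_miss_ratio:.2f} -> rate={rate:.1f} Hz")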
Max ERC Funding
960 000 €
Duration
Start date: 2013-03-01, End date: 2019-02-28
Project acronym NOLEPRO
Project Nonlinear Eigenproblems for Data Analysis
Researcher (PI) Matthias Hein
Host Institution (HI) UNIVERSITAT DES SAARLANDES
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary In machine learning and exploratory data analysis, the major goal is the development of solutions for the automatic and efficient extraction of knowledge from data. This ability is key for further progress in science and engineering. A large class of data analysis methods is based on linear eigenproblems. While linear eigenproblems are well studied, and a large part of numerical linear algebra is dedicated to the efficient calculation of eigenvectors of all kinds of structured matrices, they are limited in their modeling capabilities. Important properties like robustness against outliers and sparsity of the eigenvectors are impossible to realize. In turn, we have shown recently that many problems in data analysis can be naturally formulated as nonlinear eigenproblems. In order to use the rich structure of nonlinear eigenproblems with an ease similar to that of linear eigenproblems, a major goal of this proposal is to develop a general framework for the computation of nonlinear eigenvectors. Furthermore, the great potential of nonlinear eigenproblems will be explored in various application areas. As the scope of nonlinear eigenproblems goes far beyond data analysis, this project will have major impact not only in machine learning and its use in computer vision, bioinformatics, and information retrieval, but also in other areas of the natural sciences.
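To make the notion of a nonlinear eigenvector concrete (a generic example in the spirit of sparse PCA, not the framework developed in this project), the sketch below runs a truncated power iteration: the usual matrix-vector step is followed by a sparsity-inducing nonlinearity, so the fixed point is an "eigenvector" of a nonlinear operator rather than of the matrix itself.

import numpy as np

def truncated_power_iteration(A, k=3, iters=200, seed=0):
    """Sparse approximate leading eigenvector of a symmetric matrix A:
    after each matrix-vector product, keep only the k largest-magnitude
    entries (the nonlinear step), then renormalise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        idx = np.argsort(np.abs(y))[:-k]   # indices of all but the k largest entries
        y[idx] = 0.0                       # sparsify
        x = y / np.linalg.norm(y)
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T                                # symmetric positive semi-definite test matrix
print(truncated_power_iteration(A, k=3))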
Max ERC Funding
1 271 992 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym NONEQ.STEEL
Project Controlling Non-Equilibrium in Steels
Researcher (PI) Maria Jesus Santofimia Navarro
Host Institution (HI) TECHNISCHE UNIVERSITEIT DELFT
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Stronger and more ductile steels are increasingly demanded for advanced applications. The latest investigations show that nanostructured steels formed by non-equilibrium phases that increase strength, such as martensite and bainite, and that enhance strain hardening, such as austenite, fulfil these demands with outstanding performance.
In the last few years, I have observed that non-equilibrium phases strongly affect each other’s formation and stability, with effects on the kinetics of the microstructure development. Thus, I theoretically and experimentally proved that carbon enrichment of austenite, essential for its stability at room temperature, occurs at a high rate via diffusion from martensite. Moreover, I showed that martensite triggers bainite formation, which significantly increases bainite kinetics. I believe that these interactions between non-equilibrium phases constitute a revolutionary tool for the development of nanostructured steels in the future.
This project addresses a new concept for creating novel nanostructured steels in which the microstructure development is controlled by interactions between non-equilibrium phases. This innovative idea opens an unprecedented approach to the design of metallic alloys. Since interactions between phases affect each other’s formation and stability, the project focuses on the fundamental study of the nucleation and growth of non-equilibrium phases as well as on the analysis of their interactions. Investigations will combine the integrated application of advanced experimental techniques with atomic- and micro-scale analysis of structures by simulations. The project continues with the local analysis of the effect of non-equilibrium phases on the mechanical properties of the steels. The identification and explanation of the mechanisms will allow the creation of new nanostructured steels based on the interactions of non-equilibrium phases.
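For orientation (a classical empirical relation, not a result of this project), the volume fraction of martensite formed on quenching below the martensite-start temperature \(M_s\) is often described by the Koistinen-Marburger equation:

\[
f_{\alpha'} = 1 - \exp\!\left[-\alpha_m \left(M_s - T\right)\right],
\]

with \(\alpha_m \approx 0.011\,\mathrm{K^{-1}}\) for many carbon steels. Interactions such as carbon partitioning from martensite into austenite shift the effective \(M_s\) of the remaining austenite and hence the phase fractions finally obtained, which is precisely the kind of coupling this project seeks to exploit.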
Max ERC Funding
1 482 011 €
Duration
Start date: 2012-10-01, End date: 2018-03-31
Project acronym NOVIB
Project The Nonlinear Tuned Vibration Absorber
Researcher (PI) Gaetan Kerschen
Host Institution (HI) UNIVERSITE DE LIEGE
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "Even after more than one century of flight, both civil and military aircraft are still plagued by major vibration problems. A well-known example is the external-store induced flutter of the F-16 fighter aircraft. Such dynamical phenomena, commonly known as aeroelastic instabilities, result from the transfer of energy from the free stream to the structure and can lead to limit cycle oscillations, a phenomenon with no linear counterpart. Since nonlinear dynamical systems theory is not yet mature, the inherently nonlinear nature of these oscillations renders their mitigation a particularly difficult problem. The only practical solution to date is to limit aircraft flight envelope to regions where these instabilities are not expected to occur, as verified by intensive and expensive flight campaigns. This limitation results in a severe decrease in both aircraft efficiency and performance.
At the heart of this project is a fundamental change in paradigm: although nonlinearity is usually seen as an enemy, I propose to control - and even suppress - aeroelastic instability through the intentional use of nonlinearity. This approach has the potential to bring about a major change in aircraft design and will be achieved thanks to the development of the nonlinear tuned vibration absorber, a new, rigorous nonlinear counterpart of the linear tuned vibration absorber. This work represents a number of significant challenges, because the novel functionalities brought by the intentional use of nonlinearity can be accompanied by adverse nonlinear dynamical effects. The successful mitigation of these unwanted nonlinear effects will be a major objective of our proposed research; it will require achieving both theoretical and technical advances to make it possible. A specific effort will be made to demonstrate experimentally the theoretical findings of this research with extensive wind tunnel testing and practical implementation of the nonlinear tuned vibration absorber.
Finally, nonlinear instabilities such as limit cycle oscillations can be found in a number of non-aircraft applications including in bridges, automotive disc brakes and machine tools. The nonlinear tuned vibration absorber could also find uses in resolving problems in these applications, thus ensuring the generic character of the project."
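As a schematic of the device in question (a generic two-degree-of-freedom model, not the project's final design), a primary structure coupled to a nonlinear tuned vibration absorber can be written as

\[
m_1\ddot{x}_1 + c_1\dot{x}_1 + k_1 x_1 + c_2(\dot{x}_1-\dot{x}_2) + k_2 (x_1 - x_2) + k_{nl} (x_1 - x_2)^{3} = f(t),
\]
\[
m_2\ddot{x}_2 + c_2(\dot{x}_2-\dot{x}_1) + k_2 (x_2 - x_1) + k_{nl} (x_2 - x_1)^{3} = 0,
\]

where \(x_1\) is the primary structure, \(x_2\) the absorber, and the cubic stiffness \(k_{nl}\) is the intentional nonlinearity whose tuning against the nonlinearity of the host structure is the central design question of the project.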
Max ERC Funding
1 316 440 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym PACE
Project Programming Abstractions for Applications in Cloud Environments
Researcher (PI) Ermira Mezini
Host Institution (HI) TECHNISCHE UNIVERSITAT DARMSTADT
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "Cloud computing is changing our perception of computing: The Internet is becoming the computer and the software: (a) vast data centers and computing power are available via the Internet (infrastructure as a service), (b) software is available via the Internet as a service (software as a service). Building on the promise of unlimited processing/storage power, applications today process big amounts of data scattered over the cloud and react to events happening across the cloud. Software services must be both standard components to pay off for their provider and highly configurable and customizable to serve competitive needs of multiple tenants.
Developing such applications is challenging, given the predominant programming technology, whose fundamental abstractions were conceived for the traditional computing model.
Existing abstractions are laid out to process individual data/events. Making the complexity of applications processing big data/events manageable requires abstractions to intentionally express high-level correlations between data/events, freeing the programmer from the job of tracking the data and keeping tabs on relevant events across a cloud. Existing abstractions also fail to reconcile software reuse and extensibility at the level of large-scale software services.
PACE will deliver first-class linguistic abstractions for expressing sophisticated correlations between data/events, to be used as primitives for expressing high-level functionality. Armed with them, programmers will be relieved of micromanaging data/events and can turn their attention to what the cloud has to offer. Applications become easier to understand, maintain and evolve, and more amenable to automated reasoning and sophisticated optimizations. PACE will also deliver language concepts for large-scale modularity, extensibility, and adaptability for capturing highly polymorphic software services."
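To give a flavour of the kind of high-level correlation the abstract argues for, the sketch below pairs events from two streams that share a key within a time window, in plain Python. The `correlate` helper, the `Event` record and the sample data are invented for illustration; PACE proposes first-class language abstractions, and nothing here is its API.

    # Illustrative sketch only: correlating two event streams declaratively-ish in plain Python.
    from dataclasses import dataclass

    @dataclass
    class Event:
        kind: str      # e.g. "order" or "payment"
        key: str       # correlation key shared by related events
        time: float    # seconds since some epoch
        payload: dict

    def correlate(stream_a, stream_b, window=60.0):
        """Yield pairs of events that share a key and occur within `window` seconds."""
        by_key = {}
        for b in stream_b:
            by_key.setdefault(b.key, []).append(b)
        for a in stream_a:
            for b in by_key.get(a.key, []):
                if abs(a.time - b.time) <= window:
                    yield a, b

    orders = [Event("order", "o-1", 10.0, {"amount": 42})]
    payments = [Event("payment", "o-1", 35.0, {"amount": 42})]
    for order, payment in correlate(orders, payments):
        print("matched:", order.key, payment.payload["amount"])

The point of the illustration is that the programmer states which events belong together, rather than hand-writing the bookkeeping that tracks them across the cloud.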
Max ERC Funding
2 280 998 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym PARAPPROX
Project Parameterized Approximation
Researcher (PI) Saket Saurabh
Host Institution (HI) UNIVERSITETET I BERGEN
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "The main goal of this project is to lay the foundations of a ``non-polynomial time theory of approximation"" -- the Parameterized Approximation for NP-hard optimization problems. A combination that will use the salient features of Approximation Algorithms and
Parameterized Complexity. In the former, one relaxes the requirement of finding an optimum solution. In the latter, one relaxes the requirement of finishing in polynomial time by restricting the
combinatorial explosion in the running time to a parameter that for reasonable inputs is much smaller than the input size. This project will explore the following fundamental question:
Approximation Algorithms + Parameterized Complexity=?
New techniques will be developed that will simultaneously utilize the notions of relaxed time complexity and accuracy and thereby make problems for which both these approaches have failed independently, tractable. It is however conceivable that for some problems even this combined approach may not succeed. But in those situations we will glean valuable insight into the reasons for failure. In parallel to algorithmic studies, an intractability theory will be
developed which will provide the theoretical framework to specify the extent to which this approach might work. Thus, on one hand the project will give rise to algorithms that will have impact beyond the boundaries of computer science and on the other hand it will lead to a complexity theory that will go beyond the established notions of intractability. Both these aspects of my project are groundbreaking -- the new theory will transcend our current ideas of
efficient approximation and thereby raise the state of the art to a new level."
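To make the two relaxations concrete, the sketch below shows them side by side on Vertex Cover: a polynomial-time 2-approximation via a maximal matching, and an exact branching algorithm whose exponential cost depends only on the parameter k. This is a standard textbook illustration of the two ingredients, not an algorithm from the proposal.

    # Illustrative sketch: the two relaxations mentioned above, shown on Vertex Cover.
    # A graph is given as a list of edges; each edge is a frozenset of two vertices.

    def approx_vertex_cover(edges):
        """Polynomial time, relaxed optimality: take both endpoints of a maximal
        matching; the resulting cover is at most twice the optimum."""
        cover = set()
        for u, v in (tuple(e) for e in edges):
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    def fpt_vertex_cover(edges, k):
        """Relaxed running time, exact answer: branch on an uncovered edge.
        Runs in O(2^k * |E|); returns a cover of size <= k, or None if none exists."""
        uncovered = list(edges)
        if not uncovered:
            return set()
        if k == 0:
            return None
        u, v = tuple(uncovered[0])
        for w in (u, v):
            rest = [e for e in uncovered if w not in e]
            sub = fpt_vertex_cover(rest, k - 1)
            if sub is not None:
                return sub | {w}
        return None

    edges = [frozenset(e) for e in [("a", "b"), ("b", "c"), ("c", "d")]]
    print(approx_vertex_cover(edges))    # at most twice the optimum
    print(fpt_vertex_cover(edges, 2))    # exact, parameterized by k = 2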
Max ERC Funding
1 690 000 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym PERCY
Project Personal Cryptography
Researcher (PI) Jan Leonhard Camenisch
Host Institution (HI) IBM RESEARCH GMBH
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "The amount of personal data stored in digital form has grown tremendously. All aspects of our lives are concerned. Our data include family pictures, insurance documents, bills and receipts, health records, cryptographic keys, electronic identities, certificates, and passwords. We store and process them on several personal devices as well as in the cloud via services such as Flickr or Facebook. Managing these data is challenging: they have to be updated, backed up, synchronised across devices, and shared. In case of emergency, health records must be accessible to doctors or designated family members. Many of these data are sensitive, but adequately protecting them is virtually impossible for private users with current tools.
Encrypting data only makes managing them harder. It destroys much of the functionality that users have come to expect, such as synchronising and sharing; mismanagement of encryption keys may even render data unreadable to the owners themselves.
Our goal is to develop fundamentally new cryptographic primitives, protocols, and policy languages that let human users deal with cryptographic keys and encrypted personal data. We will invent mechanisms that 1) enable humans to securely store and retrieve cryptographic keys based on a single human-memorisable password, on biometrics, or on hardware tokens; 2) enable end users to manage their various cryptographic keys and encrypted data via these keys; and 3) enable users and cloud hosts to perform useful operations on encrypted data without needing to decrypt. Our mechanisms will run on resource-constrained devices, i.e., they will be efficient and yet provide strong security guarantees, especially in the presence of untrusted cloud hosts.
Our basic cryptographic research aims to foster the growth of a research community around protection mechanisms for end-user keys and data, and to initiate follow-up collaborative projects that deploy our theoretical results in the real world."
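As a deliberately conventional baseline for point 1, the sketch below derives an encryption key from a single password with PBKDF2 and uses it to encrypt data. The third-party `cryptography` package and all parameter choices are assumptions made for illustration; the project's own mechanisms aim to go well beyond such standard constructions.

    # Baseline illustration only: today's standard password-based encryption recipe.
    # Requires the third-party "cryptography" package; PERCY's primitives are not shown here.
    import base64, os, hashlib
    from cryptography.fernet import Fernet

    def key_from_password(password: str, salt: bytes) -> bytes:
        raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000, dklen=32)
        return base64.urlsafe_b64encode(raw)      # Fernet expects a base64-encoded 32-byte key

    salt = os.urandom(16)                         # stored alongside the ciphertext
    f = Fernet(key_from_password("correct horse battery staple", salt))
    ciphertext = f.encrypt(b"family-photos.tar")
    assert f.decrypt(ciphertext) == b"family-photos.tar"

Losing the password (or the salt) makes the data unrecoverable, which is exactly the usability problem the abstract describes.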
Max ERC Funding
2 467 700 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym PICO
Project Pico: no more passwords
Researcher (PI) Francesco Stajano
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Passwords, passphrases and PINs have become a usability disaster. Even though they are convenient for implementers, they have been over-exploited and are now increasingly unmanageable for end users, as well as insecure. The demands placed on users (passwords that are unguessable, all different, regularly changed and never written down) are no longer reasonable now that each person has to manage dozens of passwords. This project will develop and evaluate an alternative design based on a hardware token called Pico that relieves the user from having to remember passwords and PINs. Besides relieving the user of memorization efforts, the Pico solution scales to thousands of credentials, provides ``continuous authentication'' and is resistant to brute-force guessing, dictionary attacks, phishing and keylogging. To promote adoption and interoperability, the Pico design has not been patented. The Principal Investigator has been invited to speak about Pico on three continents (including at USENIX Security 2011) since releasing the first draft of his design paper.
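For orientation, the sketch below shows a generic challenge-response login in which a device proves possession of a per-service secret and the user remembers nothing. The `Token` class and the HMAC construction are textbook assumptions for illustration only; the actual Pico design uses public-key credentials and further mechanisms not shown here.

    # Generic challenge-response with a per-service secret held on a token (illustration only).
    import hmac, hashlib, os, secrets

    class Token:
        def __init__(self):
            self._secrets = {}                    # one independent secret per service
        def enrol(self, service: str) -> bytes:
            self._secrets[service] = secrets.token_bytes(32)
            return self._secrets[service]         # shared with the service at enrolment
        def respond(self, service: str, challenge: bytes) -> bytes:
            return hmac.new(self._secrets[service], challenge, hashlib.sha256).digest()

    token = Token()
    server_copy = token.enrol("example.com")

    challenge = os.urandom(16)                    # the service picks a fresh nonce
    response = token.respond("example.com", challenge)
    expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
    print("authenticated:", hmac.compare_digest(response, expected))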
Max ERC Funding
1 350 000 €
Duration
Start date: 2013-02-01, End date: 2017-12-31
Project acronym PROSECUTOR
Project Programming Language-Based Security To Rescue
Researcher (PI) Andreas Sabelfeld
Host Institution (HI) CHALMERS TEKNISKA HOEGSKOLA AB
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary It is alarming that the society's critical infrastructures are not
fully prepared to meet the challenge of information security. Modern
computing systems are increasingly extensible, inter-connected, and
mobile. However, exactly these trends make systems more vulnerable to
attacks. A particularly exposed infrastructure is the world-wide web
infrastructure, where allowing the mere possibility of fetching a web
page opens up opportunities for delivering potentially malicious
executable content past current security mechanisms such as
firewalls. A critical challenge is to secure the computing
infrastructures without losing the benefits of the trends.
It is our firm belief that attacks will continue succeeding unless a
fundamental security solution, one that focuses on the security of the
actual applications (code), is devised. To this end, we are convinced
that application-level security can be best enforced, *by
construction*, at the level of programming languages.
ProSecuToR will develop the technology of *programming language-based
security* in order to secure computing infrastructures.
Language-based security is an innovative approach for enforcing
security by construction. The project will deliver policies and
enforcement mechanisms for protecting who can see and who can modify
sensitive data. Security policies will be expressible by the
programmer at the construction phase. We will devise a policy
framework capable of expressing fine-grained application-level
security policies. We will build practical enforcement mechanisms to
enforce the policies for expressive languages. Enforcement mechanisms
will be fully automatic, preventing dangerous programs from executing
whenever there is a possibility of compromising desired security
properties. The practicality will be demonstrated by building robust
web applications. ProSecuToR is expected to lead to breakthroughs in
*securing web mashups* and *end-to-end web application security*.
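To make "who can see and who can modify sensitive data" concrete, the sketch below implements a toy dynamic information-flow check: values labelled secret propagate their label, and a public sink refuses anything derived from them. This illustrates the general idea only; the `Labeled` wrapper is invented for the example, and the project itself targets static, automatic, language-level enforcement rather than runtime wrappers.

    # Toy dynamic information-flow tracking: secret-labelled data must not reach public sinks.
    class Labeled:
        def __init__(self, value, secret=False):
            self.value, self.secret = value, secret
        def __add__(self, other):
            o_val = other.value if isinstance(other, Labeled) else other
            o_sec = other.secret if isinstance(other, Labeled) else False
            return Labeled(self.value + o_val, self.secret or o_sec)   # labels propagate

    def public_output(x):
        if isinstance(x, Labeled) and x.secret:
            raise PermissionError("explicit flow of secret data to a public sink")
        print(x.value if isinstance(x, Labeled) else x)

    pin = Labeled(1234, secret=True)
    counter = Labeled(1)
    public_output(counter + 1)            # fine: public data
    try:
        public_output(pin + 0)            # rejected: derived from the secret PIN
    except PermissionError as e:
        print("blocked:", e)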
Max ERC Funding
1 500 000 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym QCC
Project Quantum Communication and Cryptography
Researcher (PI) Iordanis Kerenidis
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Quantum Information Processing has the potential to revolutionize the future of information technologies. My long-term vision is a network of quantum and classical devices, where individual agents have the ability to communicate efficiently in a variety of ways with trusted and untrusted parties and securely delegate computational tasks to a number of untrusted large-scale quantum computing servers. In such an interconnected world, the notion of security against malicious adversaries is an imperative. In addition, the interaction between agents must remain efficient and it is important to provide the agents with incentives for honest behaviour. The realization of such a complex network of classical and quantum communication must rely on a solid theoretical foundation that nevertheless is able to foresee and handle the intricacies of real-life implementations.
The targeted breakthrough of this proposal is to set the benchmark, both theoretically and experimentally, for some of the necessary communication components of such a hybrid network. The concrete objectives of our project are to: design novel cryptographic primitives as a powerful quantum mechanical toolkit for the quantum network infrastructure; study quantum communication and games in order to minimize the communication overhead and guarantee the honest behaviour of rational agents; enhance the understanding of the fundamentals of quantum mechanics and of classical complexity theory through the lens of quantum information and cryptography; and study the security of cryptographic primitives in realistic conditions and use state-of-the-art photonic systems to implement complex cryptographic primitives.
An ERC Starting Grant will enable me to reach the above objectives through the creation and coordination of an independent quantum cryptography group that will raise the level of competence and competitiveness of the EU and will provide methods and applications essential for the future of information technology."
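For orientation, the sketch below simulates the classic BB84 key-agreement exchange with classical random choices standing in for photon preparation and measurement; measuring in the wrong basis is modelled as a fresh random outcome. It is a standard textbook protocol used purely as an illustration, not one of the new primitives the project will design.

    # Classical simulation of BB84 key agreement (textbook illustration, not a QCC deliverable).
    import secrets

    n = 32
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]

    # Bob gets Alice's bit only when he happens to measure in her basis.
    bob_results = [bit if ab == bb else secrets.randbelow(2)
                   for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: publicly compare bases and keep only the positions where they match.
    key_alice = [b for b, ab, bb in zip(alice_bits,  alice_bases, bob_bases) if ab == bb]
    key_bob   = [b for b, ab, bb in zip(bob_results, alice_bases, bob_bases) if ab == bb]
    assert key_alice == key_bob
    print(f"shared {len(key_alice)}-bit raw key:", "".join(map(str, key_alice)))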
Max ERC Funding
980 640 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym QCLS
Project Quantum Computation, Logic, and Security
Researcher (PI) Bartholomeus Paulus Franciscus Jacobs
Host Institution (HI) STICHTING KATHOLIEKE UNIVERSITEIT
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary Quantum computing involves a new computational paradigm based on the laws of quantum mechanics. It uses qubits, which are superpositions of ordinary bits, and exploits `strange' quantum phenomena such as the entanglement of qubits. It promises new forms of very fast, distributed computation. First applications are now appearing in computer security, based on the manipulation of individual qubits. The realization of large-scale quantum computing, involving multitudes of qubits, is still a technological challenge, beyond the scope of this proposal.
Quantum computing originated, understandably, in physics. This project abstracts from this physical level and will transform and develop the relevant phenomena at a mathematical level so that they can be integrated in computational models, logics and formal methods used in computer science. The project will use the unifying language and tools of category theory, which works as a ``Rosetta Stone'' in the multi-disciplinary area of computer science, mathematics and physics. The project will clarify the subtle but fundamental difference between quantum computation/logic on the one hand and probabilistic and non-deterministic computation/logic on the other. This should result in (programming) logics and models that are clear and usable for computer scientists, in particular in the area of (formal methods for) computer security. Overall, the proposal aims to ensure that the discipline of computer science is well prepared for the (approaching) moment when quantum computing becomes a reality.
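As a minimal reminder of the phenomena referred to above, the sketch below builds a two-qubit entangled (Bell) state with plain linear algebra: a Hadamard gate creates a superposition and a CNOT entangles the qubits. The use of numpy is an assumption for the example, and this state-vector calculation of course captures none of the categorical treatment the project proposes.

    # State-vector illustration of superposition and entanglement (numpy assumed).
    # |00> -> (H on qubit 0) -> CNOT -> Bell state (|00> + |11>)/sqrt(2)
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard: creates a superposition
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],                    # control = first qubit, target = second
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4)
    state[0] = 1.0                                    # start in |00>
    state = CNOT @ (np.kron(H, I) @ state)            # entangle the two qubits
    print(state)                                      # approximately [0.707, 0, 0, 0.707]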
Max ERC Funding
2 500 000 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym REE-CYCLE
Project Rare Earth Element reCYCLing with Low harmful Emissions
Researcher (PI) Thomas Nicolas Zemb
Host Institution (HI) COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary "It is a matter of strategic independence for Europe to urgently find processes taking into account environmental and economic issues, when mining and recycling rare earths. Currently THERE ARE NO SUCH INDUSTRIAL PROCESS AVAILABLE and 0% WASTE RECYCLING of RARE EARTH ELEMENTS (REE). Plus, 97% of the mining operations are performed in China, hence representing a major Sword of Damoclès for the rest of the world’s economy.
We propose to develop a new, cost effective and environmentally friendly REE recycling process. We will achieve this: (i) by enabling, for the first time ever, the fast measurement of free energy of mass transfer between complex fluids; hence it will now be possible to explore an extensive number of process formulations and phase diagrams (such a study usually takes years but will then be performed in a matter of days); (ii) develop predictive models of ion separation including the effect of long-range interactions between metal cations and micelles; (iii) by using the experimental results and prediction tools developed, to design an advanced & environmentally friendly process formulations and pilot plant; (iv) by enhancing the extraction kinetics and selectivity, by implementing a new, innovative and selective triggering cation exchange process step (ca. the exchange kinetics of a cation will be greatly enhance when compared to another one). This will represent a major breakthrough in the field of transfer methods between complex fluids.
An expected direct consequence of REE-CYCLE will be that acids’ volumes and other harmful process wastes, will be reduced by one to two orders of magnitude. Furthermore, this new understanding of mechanisms involved in selective ion transfer should open new recycling possibilities and pave the way to economical recovery of metals from a very rapidly growing “mine”, i.e. the diverse metal containing “wastes” generated by used Li-ion batteries, super-capacitors, supported catalysts and fuel cells."
Max ERC Funding
2 255 515 €
Duration
Start date: 2013-07-01, End date: 2018-06-30
Project acronym RETURN
Project RETURN – Rethinking Tunnelling in Urban Neighbourhoods
Researcher (PI) Debra Fern Laefer
Host Institution (HI) UNIVERSITY COLLEGE DUBLIN, NATIONAL UNIVERSITY OF IRELAND, DUBLIN
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary This project addresses important challenges at the forefront of geotechnical engineering and building conservation by introducing an entirely new workflow and largely unexploited data source for the prediction of building damage from tunnel-induced subsidence. The project will also make fundamental and ground-breaking advances in the collection and processing of city-scale, aerial laser scanning by avoiding any reliance on existing data for building location identification, respective data affiliation, or building feature recognition. This will create a set of techniques that are robust, scalable, and widely applicable to a broad range of communities with unreinforced masonry buildings. This will also lay the groundwork to rapidly generate and deploy city-scale, computational models for emergency management and disaster response, as well as for the growing field of environmental modelling.
Max ERC Funding
1 500 000 €
Duration
Start date: 2013-01-01, End date: 2016-12-31
Project acronym RoMoL
Project Riding on Moore's Law
Researcher (PI) Mateo Valero Cortes
Host Institution (HI) BARCELONA SUPERCOMPUTING CENTER - CENTRO NACIONAL DE SUPERCOMPUTACION
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary The most common interpretation of Moore's Law is that the number of components on a chip, and accordingly computer performance, doubles every two years. At the end of the 20th century, when clock frequencies stagnated at ~3 GHz and instruction-level parallelism reached the phase of diminishing returns, industry turned towards multiprocessors and thread-level parallelism. However, too much of the technological complexity of multicore architectures is exposed to programmers, leading to a software development nightmare.
We propose a radically new concept of parallel computer architecture, using a higher level of abstraction. Instead of expressing algorithms as a sequence of instructions, we will group instructions into higher-level tasks that will be automatically managed by the architecture, much in the same way superscalar processors managed instruction-level parallelism.
We envision a holistic approach where the parallel architecture is partially implemented as a software runtime and the remainder in hardware. The hardware gains the freedom to deliver performance at the expense of additional complexity, as long as it provides the required support primitives for the runtime software to hide that complexity from the programmer. Moreover, it offers a single solution to most of the problems we encounter in current approaches: handling parallelism, the memory wall, the power wall, and the reliability wall, in a wide range of application domains from mobile devices up to supercomputers.
We will focus our research on a highly efficient form of multicore architecture coupled with vector accelerators for exploiting both thread- and data-level parallelism.
Altogether, this novel approach to future parallel architectures is the way to ensure continued performance improvements, getting us out of the technological mess that computers have turned into, once more riding on Moore's Law.
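The sketch below gives only a flavour of task-level abstraction in plain Python: the programmer describes tasks and their data dependences, and a runtime (here just a thread pool and futures) decides when and where they run. The example tasks and their names are invented, and the proposal's hardware/runtime co-design is not represented.

    # Flavour of task-based parallelism: tasks and data dependences instead of explicit threads.
    from concurrent.futures import ThreadPoolExecutor

    def load(name):         return list(range(10))          # stand-in for I/O
    def transform(block):   return [x * x for x in block]
    def reduce_sum(a, b):   return sum(a) + sum(b)

    with ThreadPoolExecutor() as runtime:
        a = runtime.submit(load, "part-a")                  # independent tasks may run in parallel
        b = runtime.submit(load, "part-b")
        ta = runtime.submit(transform, a.result())          # dependences expressed via futures
        tb = runtime.submit(transform, b.result())
        total = runtime.submit(reduce_sum, ta.result(), tb.result())
        print(total.result())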
Max ERC Funding
2 356 467 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym RULE
Project Rule-Based Modelling
Researcher (PI) Vincent Nicolas Julien Danos
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary The purpose of this research programme is to contribute to the solution of a major problem in today’s systems biology: namely, the difficulty of bringing mechanistic modelling to bear on full-scale systems. Statistical and experimental techniques have scaled up considerably in the last two decades. Mechanistic modelling, on the other hand, is still confined to much smaller scales. Witness the painstakingly slow scaling up of cellular signalling models, despite their central role in cell response. To address the problem, we build on a new modelling methodology, called the rule-based approach (RB), pioneered by the PI of this proposal, and hailed (in a recent Nature Methods article) as the “harbinger of an entirely new way of representing and studying cellular networks”. By exploiting the modularity of biological agents, RB breaks through the combinatorial challenge of describing and simulating signalling systems. But with the possibility of writing and running larger models, new questions come to the fore. To bring mechanistic modelling to the next level requires: innovative knowledge representation techniques to anchor modelling in the data side of systems biology; new means to tame the complexity of, and reason about, the parameter space of models; new concepts to identify meaningful observables in the highly stochastic behaviour of large and combinatorial models; and clean and structured languages to comprehend spatial aspects of the biological phenomenology. The realism accrued by working at larger scales gets one closer to the bottom-up reconstruction of behaviours at the heart of systems biology, and to an understanding of the computational architecture of complex biological networks. This research programme, firmly grounded in the mathematics of programming language semantics and formal methods, extends the RB approach so as to address all of the above needs, and to deliver an integrated modelling framework in which full-scale mechanistic modelling is achievable.
Max ERC Funding
2 084 316 €
Duration
Start date: 2013-02-01, End date: 2018-01-31
Project acronym SCADAPT
Project "Large-scale Adaptive Sensing, Learning and Decision Making: Theory and Applications"
Researcher (PI) Rainer Andreas Krause
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "We address one of the fundamental challenges of our time: Acting effectively while facing a deluge of data. Massive volumes of data are generated from corporate and public sources every second, in social, scientific and commercial applications. In addition, more and more low level sensor devices are becoming available and accessible, potentially to the benefit of myriads of applications. However, access to the data is limited, due to computational, bandwidth, power and other limitations. Crucially, simply gathering data is not enough: we need to make decisions based on the information we obtain. Thus, one of the key problems is: How can we obtain most decision-relevant information at minimum cost?
Most existing techniques are either heuristics with no guarantees, or do not scale to large problems. We recently showed that many information gathering problems satisfy submodularity, an intuitive diminishing returns condition. Its exploitation allowed us to develop algorithms with strong guarantees and empirical performance. However, existing algorithms are limited: they cannot cope with dynamic phenomena that change over time, are inherently centralized and thus do not scale with modern, distributed computing paradigms. Perhaps most crucially, they have been designed with the focus of gathering data, but not for making decisions based on this data.
We seek to substantially advance large-scale adaptive decision making under partial observability, by grounding it in the novel computational framework of adaptive submodular optimization. We will develop fundamentally new scalable techniques bridging statistical learning, combinatorial optimization, probabilistic inference and decision theory to overcome the limitations of existing methods. In addition to developing novel theory and algorithms, we will demonstrate the performance of our methods on challenging real world interdisciplinary problems in community sensing, information retrieval and computational sustainability."
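The diminishing-returns property mentioned above is what makes the simple greedy rule below provably good: for monotone submodular objectives under a cardinality budget it achieves at least a (1 - 1/e) fraction of the optimum. The coverage objective and sensor data are invented for the example, and the project's adaptive, large-scale algorithms go far beyond this classical baseline.

    # Classical greedy for monotone submodular maximization under a cardinality constraint.
    # The sensor-coverage instance below is made up purely for illustration.

    def coverage(selected, coverage_sets):
        """Submodular objective: number of distinct locations covered by the chosen sensors."""
        covered = set()
        for s in selected:
            covered |= coverage_sets[s]
        return len(covered)

    def greedy(coverage_sets, budget):
        chosen = []
        for _ in range(budget):
            base = coverage(chosen, coverage_sets)
            best = max((s for s in coverage_sets if s not in chosen),
                       key=lambda s: coverage(chosen + [s], coverage_sets) - base)
            chosen.append(best)
        return chosen

    sensors = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}, "s4": {1, 7}}
    picked = greedy(sensors, budget=2)
    print(picked, "covers", coverage(picked, sensors), "locations")   # ['s3', 's1'] covers 7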
Max ERC Funding
1 499 900 €
Duration
Start date: 2012-11-01, End date: 2017-10-31
Project acronym SEEVS
Project Self-Enforcing E-Voting System: Trustworthy Election in Presence of Corrupt Authorities
Researcher (PI) Feng Hao
Host Institution (HI) UNIVERSITY OF NEWCASTLE UPON TYNE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "This project aims to develop a new generation of e-voting called the “self-enforcing e-voting system”. The new system does not depend on any trusted authorities, which is different from all currently existing or proposed e-voting schemes. This has several compelling advantages. First, voting security will be significantly improved. Second, the democratic process will be enforced as a whole. Third, the election management will be dramatically simplified. Fourth, the tallying process will become much faster.
The idea of a “self-enforcing” e-voting system has so far received little attention. Although several researchers have attempted to build such a system in the past decade, none were successful due to inefficiencies in computation, bandwidth and the number of rounds. My preliminary investigation indicates that a “self-enforcing e-voting system” is not only practically feasible, but has the potential to be the future of e-voting technology. I have identified several major research problems in the field, which need to be addressed urgently before a self-enforcing e-voting system can finally become viable for practical use. The problems span three disciplines: security, dependability and usability.
My main hypothesis is: “a secure, dependable and usable self-enforcing e-voting system will trigger a paradigm shift in voting technology”. I believe e-voting has great potential that has yet to be exploited, and this project is to develop that potential to the full. The work program involves six work packages: 1) to develop supportive security primitives to lay the foundation for future e-voting; 2) to research the impact of the “self-enforcing” requirement on dependability; 3) to develop assistive technologies to improve usability in voting; 4) to design system architectures suitable for different election scenarios; 5) to build open-source prototypes; and 6) to conduct real-world trial elections and evaluate the full technical, social, economic and political impacts."
Max ERC Funding
1 484 713 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym ShapeForge
Project ShapeForge: By-Example Synthesis for Fabrication
Researcher (PI) Sylvain Lefebvre
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Despite the advances in fabrication technologies such as 3D printing, we still lack software that allows anyone to easily manipulate and create useful objects. Not many people possess the required skills and time to create elegant designs that conform to precise technical specifications.
'By-example' shape synthesis methods are a promising way to address this problem: new shapes are automatically synthesized by assembling parts cut out of examples. The underlying assumption is that if parts are stitched along similar areas, the result will be similar in terms of its low-level representation: any small spatial neighborhood in the output matches a neighborhood in the input. However, these approaches offer little control over the global organization of the synthesized shapes, which is randomized.
The ShapeForge challenge is to automatically produce new objects visually similar to a set of examples, while ensuring that the generated objects serve a specific purpose, such as supporting weight distributed in space, affording seating space or allowing light to pass through. These properties are crucial for someone designing furniture, lamps, containers, stairs and many of the common objects surrounding us. The originality of my approach is to cast a new view on the problem of 'by-example' shape synthesis, formulating it as the joint optimization of 'by-example' objectives, semantic descriptions of the content, and structural and fabrication objectives. Throughout the project, we will consider the full creation pipeline, from modeling to the actual fabrication of objects on a 3D printer. We will test our results on printed parts, verifying that they can be fabricated and exhibit the requested structural properties in terms of stability and resistance.
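The low-level similarity assumption can be stated very compactly: every small neighborhood of the output must occur somewhere in the input. The sketch below checks this property for 1D sequences, a deliberately simplified stand-in for surface patches; it illustrates the underlying premise only, not ShapeForge's synthesis or its structural objectives.

    # Checks the by-example assumption on toy 1D "shapes": every output neighborhood
    # of width w must match some input neighborhood (illustration of the premise only).

    def neighborhoods(seq, w):
        return {tuple(seq[i:i + w]) for i in range(len(seq) - w + 1)}

    def is_locally_similar(output, example, w=3):
        return neighborhoods(output, w) <= neighborhoods(example, w)

    example = [0, 1, 2, 0, 1, 2, 0, 1, 2]        # periodic pattern taken from an exemplar
    good    = [1, 2, 0, 1, 2, 0]                 # reassembled from compatible pieces
    bad     = [0, 1, 2, 2, 1, 0]                 # contains neighborhoods absent from the example
    print(is_locally_similar(good, example))     # True
    print(is_locally_similar(bad, example))      # False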
Max ERC Funding
1 301 832 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym SPADE
Project Sophisticated Program Analysis, Declaratively
Researcher (PI) Ioannis Smaragdakis
Host Institution (HI) ETHNIKO KAI KAPODISTRIAKO PANEPISTIMIO ATHINON
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Static program analysis is a fundamental computing challenge. We have recently demonstrated significant advantages from expressing analyses for Java declaratively, in the Datalog language. This means that the algorithm is in a form that resembles a pure logical specification, rather than a step-by-step definition of the execution. The declarative specification does not merely cover the main logic of the algorithm, but its entire implementation, including the handling of complex semantic features (such as native methods, reflection, and threads) of the Java language. Surprisingly, the declarative specification can be made to execute up to an order of magnitude faster than the dominant pre-existing implementations of the same algorithms. Armed with this past experience, the SPADE project aims to develop a next-generation approach to the design and declarative implementation of static program analyses. This will include a) a substantially more flexible notion of context-sensitive analysis, which allows context to vary according to introspective observations; b) a flow-sensitive analysis framework that can be used as the basis for dataflow analysis; c) an approach to producing parallel implementations of analyses by exploiting the parallelism inherent in the declarative specification; d) an exploration of adapting analysis logic to multiple languages and paradigms, including C (using the LLVM infrastructure), functional languages (e.g., Scheme), and dynamic languages (notably, JavaScript); and e) client analysis algorithms (e.g., may-happen-in-parallel, and bug-finding analyses such as race and atomicity-violation detectors) expressed modularly over the underlying substrate of points-to analysis.
The work will have applications to multiple languages and a variety of analyses. Concretely, our precise and scalable analysis algorithms will enhance optimizing compilers, program analyzers for error detection, and program understanding tools.
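To give a feel for the declarative style, the sketch below evaluates a tiny Andersen-style, flow-insensitive points-to analysis as a Datalog-like fixpoint in Python: two rules are applied until nothing changes. The facts and rule set are a minimal illustration, far simpler than the context- and flow-sensitive analyses the project targets.

    # Datalog-flavoured points-to analysis as a naive fixpoint (minimal illustration).
    # Rules: VarPointsTo(v, o) :- New(v, o).
    #        VarPointsTo(v, o) :- Assign(v, w), VarPointsTo(w, o).

    new_facts    = {("a", "obj1"), ("b", "obj2")}     # a = new Obj1(); b = new Obj2();
    assign_facts = {("c", "a"), ("d", "c")}           # c = a; d = c;

    points_to = set(new_facts)
    changed = True
    while changed:                                    # naive evaluation; real engines are smarter
        changed = False
        for v, w in assign_facts:
            for (x, o) in list(points_to):
                if x == w and (v, o) not in points_to:
                    points_to.add((v, o))
                    changed = True

    print(sorted(points_to))   # a, c and d all point to obj1; b points to obj2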
Max ERC Funding
1 042 616 €
Duration
Start date: 2013-01-01, End date: 2019-03-31
Project acronym SPEAR
Project Specialisable, Programmable, Efficient and Robust Microprocessors
Researcher (PI) Robert Mullins
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary The development of faster, cheaper and smaller transistors has been the driving force behind the exponential growth in computing power over the past 50 years. While improvements in transistor fabrication have not yet ceased, continuing to translate these advances into better system-level performance is now a major challenge. This proposal seeks to research a new approach to building programmable digital systems, one that can offer the efficiency, robustness and flexibility required as we approach the end of the CMOS era and start to introduce new post-CMOS technologies. The ideas are centred upon a novel network-centric multiprocessor architecture, with contributions planned at every level from the circuit to the language level.
Max ERC Funding
1 271 216 €
Duration
Start date: 2012-09-01, End date: 2018-05-31
Project acronym SPEED
Project Single Pore Engineering for Membrane Development
Researcher (PI) Ian Metcalfe
Host Institution (HI) UNIVERSITY OF NEWCASTLE UPON TYNE
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary Mankind needs to innovate to deliver more efficient, environmentally-friendly and increasingly intensified processes. The development of highly selective, high temperature, inorganic membranes is critical for the introduction of the novel membrane processes that will promote the transition to a low carbon economy and result in cleaner, more efficient and safer chemical conversions. However, high temperature membranes are difficult to study because of problems associated with sealing and determining the relatively low fluxes that are present in most laboratory systems (fluxes are conventionally determined by gas analysis of the permeate stream). Characterisation is difficult because of complex membrane microstructures.
I will avoid these problems by adopting an entirely new approach to membrane materials selection and kinetic testing through a pioneering study of permeation in single pores of model membranes. Firstly, model single pore systems will be designed and fabricated; appropriate micro-analytical techniques to follow permeation will be developed. Secondly, these model systems will be used to screen novel combinations of materials for hybrid membranes and to determine kinetics with a degree of control not previously available in this field. Thirdly, I will use our improved understanding of membrane kinetics to guide real membrane design and fabrication. Real membrane performance will be compared to model predictions and I will investigate how the new membranes can impact on process design.
If successful, an entirely new approach to membrane science will be developed and demonstrated. New membranes will be developed facilitating the adoption of new processes addressing timely challenges such as the production of high purity hydrogen from low-grade reducing gases, carbon dioxide capture and the removal of oxides of nitrogen from oxygen-containing exhaust streams.
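For reference, membrane fluxes of the kind discussed above are conventionally reported through the standard permeation relation (textbook background, not a result of this project):

\[
J_i \;=\; \frac{P_i}{L}\left(p_{i,\mathrm{feed}} - p_{i,\mathrm{permeate}}\right),
\qquad
\alpha_{ij} \;=\; \frac{P_i}{P_j},
\]

where \(J_i\) is the flux of species \(i\), \(P_i\) its permeability, \(L\) the membrane thickness (so \(P_i/L\) is the permeance), \(p\) the partial pressures on either side, and \(\alpha_{ij}\) the ideal selectivity. Single-pore measurements aim to determine such quantities directly, without the averaging introduced by complex microstructures.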
Max ERC Funding
2 080 000 €
Duration
Start date: 2013-02-01, End date: 2019-01-31
Project acronym SPINAM
Project Electrospinning: a method to elaborate membrane-electrode assemblies for fuel cells
Researcher (PI) Sara Cavaliere
Host Institution (HI) UNIVERSITE DE MONTPELLIER
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary "This project leads to the development of novel MEAs comprising components elaborated by the electrospinning technique. Proton exchange membranes will be elaborated from electrospun ionomer fibres and characterised. In the first stages of the work, we will use commercial perfluorosulfonic acid polymers, but later we will extend the study to specific partially fluorinated ionomers developed within th project, as well as to sulfonated polyaromatic ionomers. Fuel cell electrodes will be prepared using conducting fibres prepared by electrospinning as supports. Initially we will focus on carbon nanofibres, and then on modified carbon support materials (heteroatom functionalisation, oriented carbons) and finally on metal oxides and carbides. The resultant nanofibres will serve as support for the deposition of metal catalyst particles (Pt, Pt/Co, Pt/Ru). Conventional impregnation routes and also a novel “one pot” method will be used.
Detailed (structural, morphological, electrical, electrochemical) characterisation of the electrodes will be carried out in collaboration between partners. The membranes and electrodes developed will be assembled into MEAs using CCM (catalyst coated membrane) and GDE (gas diffusion electrode) approaches and also an original "membrane coated GDE" method based on electrospinning. Finally, the obtained MEAs will be characterised in situ in an operating fuel cell fed with hydrogen or methanol and the results compared with those of conventional MEAs."
Max ERC Funding
1 352 774 €
Duration
Start date: 2013-01-01, End date: 2018-06-30
Project acronym STATOR
Project STATic analysis with ORiginal methods
Researcher (PI) David Pascal Monniaux
Host Institution (HI) UNIVERSITE GRENOBLE ALPES
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Since the beginning of computing, software has had bugs. If a word processor crashes, consequences are limited. If a networked application has security bugs (e.g. buffer overflows), important information (e.g. financial or medical) can leak. More importantly, today's planes are flown by computers, voting machines as well as medical devices such as infusion pumps are computerized, and surgeries are performed by robots. Clearly, it is in the best interest of society that such software is bug-free.
BUGS ARE NOT INEVITABLE!
Traditionally, software is tested, i.e. run on a limited number of test cases. Yet, testing cannot prove the absence of bugs in untested configurations. Formal methods, producing mathematical proofs of correctness, have long been proposed as a means to give strong assurance on software. They unfortunately had a (not entirely undeserved) reputation for not scaling up to real software.
Faster, automated static analysis methods were however produced in the 2000s, which could cope with some specific classes of applications: predicate abstraction, based on decision procedures (e.g. Microsoft's device driver verifier) and abstract interpretation (e.g. Polyspace and Astrée, for automotive, aerospace etc.). Yet such systems are still unusable on more common programs: they reject some program constructs, they give too many false alarms (about nonexistent problems) and/or they take too much time and memory.
In recent years, I and others have proposed techniques combining decision procedures and classical abstract interpretation, so as to decrease false alarms while keeping costs reasonable. These techniques are still in their infancy. The purpose of STATOR is to develop new combination techniques, so as to break the precision/efficiency barrier.
Since the only way to see whether a technique really works is to implement and try it, STATOR will produce a practical static analysis tool and experiment with it on real programs.
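As a flavour of what such analyses compute, the following is a minimal, self-contained sketch of abstract interpretation with the interval domain, including a widening step. It is illustrative textbook material rather than the STATOR tool itself, and the analysed one-loop program is invented for the example.

# Interval-domain abstract interpretation of:  x = 0; while (x < 10): x = x + 1
# The analysis proves that x == 10 at the loop exit.

INF = float("inf")

def join(a, b):                      # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                     # widening: push unstable bounds to infinity
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def meet_lt(a, k):                   # intersect interval with {x | x < k}
    return (a[0], min(a[1], k - 1))

def meet_ge(a, k):                   # intersect interval with {x | x >= k}
    return (max(a[0], k), a[1])

def body(head):                      # abstract effect of one loop iteration
    inside = meet_lt(head, 10)       # loop guard x < 10
    return (inside[0] + 1, inside[1] + 1)   # x = x + 1

head = (0, 0)                        # abstract state after 'x = 0'
for i in range(100):                 # ascending iteration, widening after a few steps
    nxt = join((0, 0), body(head))
    nxt = widen(head, nxt) if i >= 3 else nxt
    if nxt == head:
        break
    head = nxt

head = join((0, 0), body(head))      # one decreasing (narrowing) step
exit_state = meet_ge(head, 10)       # loop exit guard: x >= 10
print(head, exit_state)              # (0, 10) (10, 10)

The widening guarantees termination even when plain iteration would not; the decision-procedure-based combinations mentioned above aim at recovering the precision that such coarse abstractions lose.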
Max ERC Funding
1 472 495 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym SUBLINEAR
Project Sublinear algorithms for the analysis of very large graphs
Researcher (PI) Christian Sohler
Host Institution (HI) TECHNISCHE UNIVERSITAT DORTMUND
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Large graphs appear in many application areas. Typical examples are the webgraph, the internet graph, friendship graphs of social networks like Facebook or Google+, citation graphs, collaboration graphs, and transportation networks.
The structure of these graphs contains valuable information, but their size makes them very hard to analyze. We need special algorithms that analyze the graph structure via random sampling.
The main objective of the proposed project is to advance our understanding of the foundations of sampling processes for the analysis of the structure of large graphs. We will use the approach of Property Testing, a theoretical framework to analyze such sampling algorithms.
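As a toy illustration of sampling-based graph analysis (not one of the project's algorithms, which come with rigorous Property Testing guarantees and treat skewed degree distributions far more carefully), the sketch below estimates the average degree of a graph using a number of queries that is independent of the graph's size. The synthetic degree oracle is invented for the example.

import random

def estimate_average_degree(num_vertices, degree_oracle, samples=1000, seed=0):
    """Estimate the average degree from uniform vertex samples and degree queries."""
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        v = rng.randrange(num_vertices)   # uniform vertex sample
        total += degree_oracle(v)         # one degree query per sample
    return total / samples                # query cost independent of graph size

# Usage on a synthetic graph we never materialise: vertex v has degree (v % 10) + 1,
# so the true average degree is 5.5 regardless of how large the graph is.
n = 10**9
print(estimate_average_degree(n, lambda v: (v % 10) + 1, samples=2000))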
Max ERC Funding
1 475 306 €
Duration
Start date: 2012-12-01, End date: 2018-11-30
Project acronym SUNFUELS
Project SOLAR THERMOCHEMICAL PRODUCTION OF FUELS
Researcher (PI) Aldo Ernesto Steinfeld
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary "The research is aimed at the efficient production of solar fuels from H2O and CO2. Solar thermochemical approaches using concentrating solar energy inherently operate at high temperatures and utilize the entire solar spectrum, and as such provide thermodynamic favorable paths to efficient solar fuel production. The targeted solar fuel is syngas: a mixture of mainly H2 and CO that can be further processed to liquid hydrocarbon fuels (e.g. diesel, kerosene), which offer high energy densities and are most convenient for the transportation sector without changes in the current global infrastructure. The strategy for the efficient production of solar syngas from H2O and CO2 involves research on a 2-step thermochemical redox cycle, encompassing: 1st step) the solar-driven endothermic reduction of a metal oxide; and 2nd step) the non-solar exothermic oxidation of the reduced metal oxide with H2O/CO2, yielding syngas together with the initial metal oxide. Two redox pairs have been identified as most promising: the volatile ZnO/Zn and non-volatile CeO2/CeO2-δ. Novel materials, structures, and solar reactor concepts will be developed for enhanced heat and mass transport, fast reaction rates, and high specific yields of fuel generation. Thermodynamic and kinetic analyses of the pertinent redox reactions will enable screening dopants. Solar reactor modeling will incorporate fundamental transport phenomena coupled to reaction kinetics by applying advanced numerical methods (e.g. Monte Carlo coupled to CFD at the pore scale). Solar reactor prototypes for 5 kW solar radiative power input will experimentally demonstrate the efficient production of solar syngas and their suitability for large-scale industrial implementation. The proposed research contributes to the development of technically viable and cost effective technologies for sustainable transportation fuels, and thus addresses one of the most pressing challenges that modern society is facing at the global level."
Max ERC Funding
2 187 650 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym SUPREL
Project "Scaling Up Reinforcement Learning: Structure Learning, Skill Acquisition, and Reward Shaping"
Researcher (PI) Shie Mannor
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "Learning how to act optimally in high-dimensional stochastic dynamic environments is a fundamental problem in many areas of engineering and computer science. The basic setup is that of an agent who interacts with an environment trying to maximize some long term payoff while having access to observations of the state of the environment. A standard approach to solving this problem is the Reinforcement Learning (RL) paradigm in which an agent is trying to improve its policy by interacting with the environment or, more generally, by using different sources of information such as traces from an expert and interacting with a simulator. In spite of several success stories of the RL paradigm, a unified methodology for scaling-up RL has not emerged to date. The goal of this research proposal is to create a methodology for learning and acting in high-dimensional stochastic dynamic environments that would scale up to real-world applications well and that will be useful across domains and engineering disciplines.
We focus on three key aspects of learning and optimization in high dimensional stochastic dynamic environments that are interrelated and essential to scaling up RL. First, we consider the problem of structure learning. This is the problem of how to identify the key features and underlying structures in the environment that are most useful for optimization and learning. Second, we consider the problem of learning, defining, and optimizing skills. Skills are sub-policies whose goal is more focused than solving the whole optimization problem and can hence be more easily learned and optimized. Third, we consider changing the natural reward of the system to obtain desirable properties of the solution such as robustness, aversion to risk and smoothness of the control policy. In order to validate our approach we study two challenging real-world domains: a jet fighter flight simulator and a smart-grid short term control problem."
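As a minimal illustration of the reward-shaping idea (not the project's algorithms), the sketch below runs tabular Q-learning on an invented chain environment and adds a potential-based shaping term, which is known to leave the optimal policy unchanged while typically speeding up learning (Ng, Harada and Russell, 1999).

import random

n, gamma, alpha, eps = 10, 0.95, 0.1, 0.1
Q = [[0.0, 0.0] for _ in range(n)]       # Q[state][action], actions: 0 = left, 1 = right
phi = lambda s: s / (n - 1)              # shaping potential: progress toward the goal

def step(s, a):
    """Chain environment: reward 1 only on reaching the rightmost state."""
    s2 = min(n - 1, s + 1) if a == 1 else max(0, s - 1)
    r = 1.0 if s2 == n - 1 else 0.0
    return s2, r, s2 == n - 1

rng = random.Random(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        shaped = r + gamma * phi(s2) - phi(s)             # potential-based reward shaping
        target = shaped + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

print([max((0, 1), key=lambda a: Q[s][a]) for s in range(n)])  # greedy policy: mostly 1 (move right)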
Max ERC Funding
1 500 000 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym SURFCOMP
Project Comparing and Analyzing Collections of Surfaces
Researcher (PI) Yaron Lipman
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary The proposed research program intends to cover all aspects of the problem of learning and analyzing collections of surfaces and apply the developed methods and algorithms to a wide range of scientific data.
The proposal has two parts:
In the first part of the proposal, we concentrate on developing the most basic operators for automatically comparing pairs of surfaces. Although this problem has received a lot of attention in recent years, and significant progress has been made, there is still a great need for algorithms that are both efficient/tractable and come with guarantees of convergence or accuracy. The main difficulty in most approaches so far is that they work in a huge and non-linear search space to compare surfaces; most algorithms resort to gradient descent from an initial guess, risking finding only a locally optimal solution.
We offer a few research directions to tackle this problem based on the idea of identifying EFFICIENT search spaces that APPROXIMATE the desired optimal correspondence.
In the second part of the proposal we propose to make use of the methods developed in the first part to perform global analysis of, or learn, collections of surfaces. We put special emphasis on "real-world" applications and intend to validate our algorithms on significant collections, including data-sets such as biological anatomic data-sets and computer graphics' benchmark collections of surfaces. We propose to formulate and construct geometric structures on these collections and investigate their domain-specific implications.
Max ERC Funding
1 113 744 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym Tailor Graphene
Project Tailoring Graphene to Withstand Large Deformations
Researcher (PI) Constantine Galiotis
Host Institution (HI) FOUNDATION FOR RESEARCH AND TECHNOLOGY HELLAS
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary This proposal aims, via a comprehensive and interdisciplinary programme of research, to determine the full response of monolayer (atomic thickness) graphene to extreme axial tensile deformation up to failure and to measure directly its tensile strength, stiffness, strain-to-failure and, most importantly, the effect of orthogonal buckling on its overall tensile properties. Our recent results have already shown that graphene buckling of any form can be suppressed by embedding the flakes into polymer matrices. We have indeed quantified this effect for any flake geometry and have produced master curves relating geometrical aspects to compression strain-to-failure. In the proposed work, we will make good use of this finding by altering the geometry of the flakes and thus design graphene strips (micro-ribbons) of specific dimensions which, when embedded in polymer matrices, can be stretched to large deformation and even failure without simultaneous buckling in the other direction. This is indeed the only route possible for the exploitation of the potential of graphene as an efficient reinforcement in composites. Since orthogonal buckling during stretching is expected to alter, among other things, the Dirac spectrum and consequently the electronic properties of graphene, we intend to use Raman spectroscopy to produce stress/strain maps in two dimensions in order to fully quantify this effect from the mechanical standpoint. Finally, another option for ironing out the wrinkles is to apply a simultaneous thermal field during tensile loading. This will give rise to a biaxial stretching of graphene, which presents another interesting field of study, particularly for already envisaged applications of graphene in flexible displays and coatings.
Max ERC Funding
2 025 600 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym TRAM
Project Transport at the microscopic interface
Researcher (PI) Rob Gerhardus Hendrikus Lammertink
Host Institution (HI) UNIVERSITEIT TWENTE
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary The research objective is to study and model membrane separation processes at the microscopic scale. I propose to exploit microfluidic platforms that each embody a specific membrane separation challenge to be studied, namely biofouling, overlimiting current, and concentration polarization. Each challenge will be the topic for an individual PhD student.
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym TRAMAN21
Project Traffic Management for the 21st Century
Researcher (PI) Markos Papageorgiou
Host Institution (HI) POLYTECHNEIO KRITIS
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary Traffic congestion on motorways is a serious threat for the economic and social life of modern societies and for the environment, which calls for drastic and radical solutions. Conventional traffic management faces limitations. During the last decade, there has been an enormous effort to develop a variety of Vehicle Automation and Communication Systems (VACS) that are expected to revolutionise the features and capabilities of individual vehicles. VACS are typically developed to benefit the individual vehicle, without a clear view for the implications, advantages and disadvantages they may have for the accordingly modified traffic characteristics. Thus, the introduction of VACS brings along the necessity and growing opportunities for adapted or utterly new traffic management.
It is the main objective of TRAMAN21 to develop the foundations and first steps that will pave the way towards a new era of motorway traffic management research and practice, which is indispensable for exploiting the evolving VACS deployment. TRAMAN21 assesses the relevance of VACS for improved traffic flow and develops specific options for a sensible upgrade of the traffic conditions, particularly at the network’s weak points, i.e. at bottlenecks and incident locations. The proposed work comprises the development of new traffic flow modelling and control approaches on the basis of appropriate methods from many-particle Physics, Automatic Control and Optimisation. A field trial is included, aiming at a preliminary testing and demonstration of the developed concepts.
The main risk of TRAMAN21 stems from the uncertainty in the evolution of VACS, which is a challenge for the required modelling and control developments. But, if successful, TRAMAN21 will contribute to a substantial reduction of the estimated annual European traffic congestion cost of 120 billion € and of the related environmental pollution, and will trigger further innovative developments and a new era of traffic flow modelling and control research.
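For orientation, macroscopic traffic flow modelling of the kind to be extended here typically starts from the classical LWR conservation law together with a fundamental diagram (standard background, not necessarily the model TRAMAN21 will adopt):

\[
\frac{\partial \rho(x,t)}{\partial t} + \frac{\partial q(x,t)}{\partial x} = 0,
\qquad
q(x,t) = \rho(x,t)\, V\!\bigl(\rho(x,t)\bigr),
\]

where \(\rho\) is the traffic density, \(q\) the flow, and \(V(\rho)\) the equilibrium speed-density relation. VACS can loosely be thought of as modifying \(V(\rho)\) and introducing new measurement and control inputs into such models.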
Max ERC Funding
1 496 880 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym UNIQUE
Project Non-equilibrium Information and Capacity Envelopes: Towards a Unified Information and Queueing Theory
Researcher (PI) Markus Fidler
Host Institution (HI) GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Dating back sixty years to the seminal works by Shannon, information theory is a cornerstone of communications. Among other things, its significance stems from the decoupling of data compression and transmission as accomplished by the celebrated source and channel coding theorems. This success has, however, not been carried over to communications networks. Yet, particular advances, such as in cross-layer optimization and network coding, show the tremendous potential that may be accessible by a network information theory.
A major challenge for establishing a network information theory lies in the properties of network data traffic, which is highly variable (sporadic) and delay-sensitive. In contrast, information theory mostly neglects the dynamics of information and capacity and focuses on averages and asymptotic limits. Typically, these limits can be achieved with infinitesimally small probability of error, assuming, however, arbitrarily long codewords (coding delays). Queueing theory, on the other hand, is employed to analyze network delays using (stochastic) models of a network's traffic arrivals and service. To date, a tight link between these models and the information-theoretic concepts of entropy and channel capacity is missing.
The goal of this project is to contribute elements of a network information theory that bridge the gap towards communications (queueing) networks. To this end, we use concepts from information theory to explore the dynamics of sources and channels. Our approach envisions envelope functions of information and capacity that have the ability to model the impact of the timescale, and that converge in the limit to the entropy and the channel capacity, respectively. The model will enable queueing theoretical investigations, permitting us to make significant contributions to the field of network information theory, and to provide substantial, new insights and applications from a holistic analysis of communications networks.
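For reference, the asymptotic quantities that the envisioned envelopes should converge to are the classical entropy and channel capacity (standard definitions, included only for orientation):

\[
H(X) = -\sum_{x} p(x)\,\log_2 p(x),
\qquad
C = \max_{p(x)} I(X;Y),
\]

where \(I(X;Y)\) is the mutual information between the channel input \(X\) and output \(Y\). The envelope functions proposed here would be finite-timescale counterparts of these limits, suitable as arrival and service models in a queueing analysis.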
Max ERC Funding
1 316 408 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym VASCULARGROWTH
Project Bioengineering prediction of three-dimensional vascular growth and remodeling in embryonic great-vessel development
Researcher (PI) Kerem Pekkan
Host Institution (HI) KOC UNIVERSITY
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Globally, 1 in 100 children are born with significant congenital heart defects (CHD), representing either new genetic mutations or epigenetic insults that alter cardiac morphogenesis in utero. Embryonic cardiovascular (CV) systems dynamically regulate structure and function over very short time periods throughout morphogenesis, and biomechanical loading conditions within the heart and great vessels alter morphogenesis and gene expression. This proposal is structured around a common goal of developing a comprehensive and predictive understanding of the biomechanics and regulation of great-vessel development and its plasticity in response to clinically relevant epigenetic changes in loading conditions. Biomechanical regulation of vascular morphogenesis, including potential aortic arch (AA) reversibility or plasticity after epigenetic events relevant to human CHD, is investigated using multimodal experiments in the chick embryo that examine normal AA growth and remodeling, together with microsurgical instrumentation that alters ventricular and vascular blood flow loading during critical periods in AA morphogenesis. WP 1 establishes our novel optimization framework, incorporates basic input/output in vivo data sets, and validates it. WP 2 and 3 extend the numerical models to perturbed biomechanical environments and incorporate new objective functions that take in vivo structural data as inputs and predict changes in structure and function. WP 4 incorporates candidate genes and pathways during normal and experimentally altered AA morphogenesis. This proposal develops and validates the first in vivo morphomechanics-integrated three-dimensional mathematical models of AA growth and remodeling that can predict normal growth patterns and abnormal vascular adaptations common in CHD. Multidisciplinary application of bioengineering principles to CHD is likely to provide novel insights and paradigms towards our long-term goal of optimizing CHD interventions, outcomes, and the potential for preventive strategies.
Max ERC Funding
1 995 140 €
Duration
Start date: 2013-01-01, End date: 2019-07-31
Project acronym VERISYNTH
Project Automatic Synthesis of Software Verification Tools from Proof Rules
Researcher (PI) Andrey Rybalchenko
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Software complexity is growing, and so is the demand for software verification. Soon, perhaps within a decade, wide deployment of software verification tools will be indispensable or even mandatory to ensure software reliability in a large number of application domains, including but not restricted to safety- and security-critical systems. To adequately respond to the demand we need to eliminate tedious aspects of software verifier development, while providing support for the accomplishment of creative aspects. We believe that the next generation of software verifiers will be constructed from logical specifications designed by quality/verification engineers with expertise in the application domain. Given a specification describing a verification method, a corresponding software verifier will be obtained by implementing a frontend that translates software source code into constraints according to the specification and then coupling the frontend with a highly-tuned general-purpose constraint solver, thus eliminating the need for algorithmic implementation efforts from the ground up. This project proposes the necessary methodology, solving algorithms, and tools for building the verifiers of the future.
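A minimal sketch of the frontend idea follows: a tiny, invented straight-line program is translated into SMT-LIB constraints whose unsatisfiability establishes the asserted property, leaving all the solving to an off-the-shelf constraint solver. This only illustrates the pipeline; the project targets specifications of full verification methods and correspondingly richer (e.g., Horn-clause) constraints.

# Emit the verification condition of:   x := a + 1; assert(x > a)
# The assertion is valid iff the negated property is unsatisfiable.

def vc_to_smtlib():
    lines = [
        "(set-logic QF_LIA)",
        "(declare-const a Int)",
        "(declare-const x Int)",
        "(assert (= x (+ a 1)))",   # effect of the assignment
        "(assert (not (> x a)))",   # negation of the asserted property
        "(check-sat)",              # expected answer: unsat => assertion holds
    ]
    return "\n".join(lines)

print(vc_to_smtlib())   # feed the output to any SMT solver, e.g. z3 or cvc5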
Max ERC Funding
1 476 562 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym VISCUL
Project Visual Culture for Image Understanding
Researcher (PI) Vittorio Ferrari
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary The goal of computer vision is to interpret complex visual scenes, by recognizing objects and understanding their spatial arrangement within the scene. Achieving this involves learning categories from annotated training images. In the current paradigm, each category is learned starting from scratch without any previous knowledge. This is in contrast with how humans learn: they accumulate knowledge about visual concepts and reuse it to help learn new concepts.
The goal of this project is to develop a new paradigm where computers learn visual concepts on top of what they already know, as opposed to learning every concept from scratch. We propose to progressively learn a vast body of visual knowledge, coined Visual Culture, from a variety of available datasets. We will acquire models of the appearance and shape of categories in general, models of specific categories, and models of their spatial organization into scenes. We will start learning from datasets with a high degree of supervision and then gradually move to datasets with lower degrees. At each stage we will employ the current body of knowledge to support learning with less supervision. After acquiring Visual Culture from existing datasets, the machine will be ready to learn further with little or no supervision, for example from the Internet. Visual Culture is related to ideas in other fields, but no similar endeavor has been undertaken in Computer Vision yet.
This project will make an important step toward mastering the complexity of the visual world, by advancing the state-of-the-art in terms of the number of categories that can be localized, and in the variability covered by each model. Moreover, Visual Culture is more than a mere collection of isolated categories: it is a web of object, background, and scene models connected by spatial relations and sharing visual properties. This will bring us closer to image understanding, the automatic interpretation of complex novel images.
Max ERC Funding
1 481 516 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym VISLIM
Project Visual Learning and Inference in Joint Scene Models
Researcher (PI) Stefan Roth
Host Institution (HI) TECHNISCHE UNIVERSITAT DARMSTADT
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "One of the principal difficulties in processing, analyzing, and interpreting digital images is that many attributes of visual scenes relate in complex manners. Despite that, the vast majority of today's top-performing computer vision approaches estimate a particular attribute (e.g., motion, scene segmentation, restored image, object presence, etc.) in isolation; other pertinent attributes are either ignored or crudely pre-computed by ignoring any mutual relation. But since estimating a singular attribute of a visual scene from images is often highly ambiguous, there is substantial potential benefit in estimating several attributes jointly.
The goal of this project is to develop the foundations of modeling, learning and inference in rich, joint representations of visual scenes that naturally encompass several of the pertinent scene attributes. Importantly, this goes beyond combining multiple cues, but rather aims at modeling and inferring multiple scene attributes jointly to take advantage of their interplay and their mutual reinforcement, ultimately working toward a full(er) understanding of visual scenes. While the basic idea of using joint representations of visual scenes has a long history, it has only rarely come to fruition. VISLIM aims to significantly push the current state of the art by developing a more general and versatile toolbox for joint scene modeling that addresses heterogeneous visual representations (discrete and continuous, dense and sparse) as well as a wide range of levels of abstractions (from the pixel level to high-level abstractions). This is expected to lead joint scene models beyond conceptual appeal to practical impact and top-level application performance. No other endeavor in computer vision has attempted to develop a similarly broad foundation for joint scene modeling. In doing so we aim to move closer to image understanding, with significant potential impact in other disciplines of science, technology and humanities."
Max ERC Funding
1 374 030 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym VSSC
Project Verifying and Synthesizing Software Compositions
Researcher (PI) Shmuel (Mooly) Sagiv
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary One of the first things a programmer must commit to in developing any significant piece of software is the representation of the data. In applications where performance or memory consumption is important, this representation is often quite complex: the data may be indexed in multiple ways and use a variety of concrete, interlinked data structures. The current situation, in which programmers either directly write these data structures themselves or use a standard data structure library, leads to two problems:
1: The particular choice of data representation is based on an expectation of what the most common workloads will be; that is, the programmer has already made cost-benefit trade-offs based on the expected distribution of operations the program will perform on these data structures.
2: It is difficult for the programmer to check or even express the high-level consistency properties of complex structures, especially when these structures are shared. This also makes software verification in existing programming languages very hard.
We will investigate specification languages for describing and reasoning about program data at a much higher level. The hope is that this can reduce the inherent complexity of reasoning about programs. In tandem, we will check whether the high-level specifications can be semi-automatically mapped to efficient data representations.
A novel aspect of our approach allows the user to define global invariants and a restricted set of high level operations, and only then to synthesize a representation that both adheres to the invariants and is highly specialized to exactly the set of operations the user requires. In contrast, the classical approach in databases is to assume nothing about the queries that must be answered; the representation must support all possible operations.
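The following Python sketch is a hypothetical illustration of that workflow, not the VSSC specification language or synthesis algorithm: the Spec class, the two candidate representations and the selection rule are all assumptions. The user declares the fields, a uniqueness invariant and the set of operations actually needed, and a toy synthesizer picks a concrete representation specialised to exactly that workload.

# Hypothetical spec-to-representation synthesis (illustration only).
import bisect
from dataclasses import dataclass

@dataclass
class Spec:
    fields: tuple       # e.g. ("key", "value")
    unique: str         # invariant: this field is a unique key
    operations: set     # the only operations the program will perform

class SortedTable:
    """Candidate representation supporting ordered range queries."""
    def __init__(self):
        self.keys, self.values = [], []
    def insert(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            raise ValueError("uniqueness invariant violated")
        self.keys.insert(i, key)
        self.values.insert(i, value)
    def lookup_range(self, lo, hi):
        i = bisect.bisect_left(self.keys, lo)
        j = bisect.bisect_right(self.keys, hi)
        return list(zip(self.keys[i:j], self.values[i:j]))

def synthesize(spec):
    # Only point lookups are needed: a hash map enforces the uniqueness
    # invariant and makes exactly those operations O(1) on average.
    if spec.operations <= {"insert", "lookup_by_key"}:
        return {}
    # Range queries were declared as well: fall back to a sorted table,
    # trading insertion cost for ordered iteration.
    return SortedTable()

# The representation is derived from the declared workload, not hand-picked.
table = synthesize(Spec(("key", "value"), unique="key",
                        operations={"insert", "lookup_range"}))
table.insert("b", 2)
table.insert("a", 1)
print(table.lookup_range("a", "b"))   # [('a', 1), ('b', 2)]

Unlike a database, which must support every possible query, the synthesizer here exploits the restricted operation set, which is the contrast drawn in the summary above.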
Max ERC Funding
1 577 200 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym XFLOW
Project Ultrafast X-Ray Tomography of Turbulent Bubble Flows
Researcher (PI) Markus Schubert
Host Institution (HI) HELMHOLTZ-ZENTRUM DRESDEN-ROSSENDORF EV
Call Details Starting Grant (StG), PE8, ERC-2012-StG_20111012
Summary Multiphase reactors are omnipresent in chemical engineering and dominate today's manufacturing of chemical products, so that they are involved in most of our daily products. This implies a huge economic and ecological impact of reactor performance. The basic idea of a multiphase reactor is to bring chemical precursors and catalysts into contact for a time sufficient for the reaction to proceed, but reactor performance is crucially affected by the complex reactor hydrodynamics. Proper optimization would require that multiphase flows be adequately understood.
Gas bubbled into a pool of liquid is the simplest example of a multiphase reactor. Bubble columns or distillation columns, however, house millions of bubbles emerging in swarms with interactions such as coalescence and breakage events that determine the whole process behaviour. The understanding of such disperse gas-liquid flows is still fragmentary and requires a ground-breaking update.
The aim of the project is to apply the world's fastest tomographic imaging method to study such turbulent gas-liquid dispersed flows in column reactors such as bubble columns and tray columns. The project intends to provide unique insights into bubble swarm behaviour at operating conditions that have so far been hidden from the engineer's eyes.
The project is foreseen to enhance the fundamental understanding of hydrodynamic parameters, evolving flow patterns and coherent structures, as well as coalescence and breakage mechanisms, regardless of whether the systems are pressurized, filled with particle packings, operated with organic liquids or slurries, or equipped with internals.
The interdisciplinary team will re-establish the process intensification route for multiphase reactors through a new understanding of small-scale phenomena, their mathematical description, and their extrapolation to the reactor scale, thereby providing a tool for reactor optimization.
Max ERC Funding
1 172 640 €
Duration
Start date: 2013-01-01, End date: 2016-12-31