Project acronym AGNOSTIC
Project Actively Enhanced Cognition based Framework for Design of Complex Systems
Researcher (PI) Bjoern Ottersten
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Advanced Grant (AdG), PE7, ERC-2016-ADG
Summary Parameterized mathematical models have been central to the understanding and design of communication, networking, and radar systems. However, they often lack the ability to model intricate interactions innate in complex systems. On the other hand, data-driven approaches do not need explicit mathematical models for data generation and have a wider applicability at the cost of flexibility. These approaches need labelled data, representing all the facets of the system's interaction with the environment. With the aforementioned systems becoming increasingly complex, with intricate interactions, and operating in dynamic environments, the number of system configurations can be rather large, leading to a paucity of labelled data. Thus there are emerging networks of systems of critical importance whose cognition is not effectively covered by traditional approaches. AGNOSTIC uses the process of exploration through system probing and exploitation of observed data in an iterative manner, drawing upon traditional model-based approaches and data-driven discriminative learning to enhance functionality, performance, and robustness through the notion of active cognition. AGNOSTIC clearly departs from a passive assimilation of data and aims to formalize the exploitation/exploration framework in dynamic environments. The development of this framework in three application areas is central to AGNOSTIC. The project aims to provide active cognition in radar to learn the environment and other active systems to ensure situational awareness and coexistence; to apply active probing in radio access networks to infer network behaviour towards spectrum sharing and self-configuration; and to learn and adapt to user demand for content distribution in caching networks, drastically improving network efficiency.
Although these cognitive systems interact with the environment in very different ways, sufficient abstraction allows cross-fertilization of insights and approaches motivating their joint treatment.
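The iterative probe-then-exploit loop that AGNOSTIC aims to formalize is closely related to the classical exploration/exploitation trade-off studied in multi-armed bandits. A minimal ε-greedy sketch of that trade-off follows; it is purely illustrative (the function names, arm rewards, and parameters are our own, not from the project):

```python
import random

def epsilon_greedy_bandit(arm_means, steps=5000, epsilon=0.1, seed=0):
    """Toy exploration/exploitation loop: probe a random arm with
    probability epsilon, otherwise exploit the best empirical estimate."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n          # how often each arm was pulled
    estimates = [0.0] * n     # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:                       # explore: probe the system
            arm = rng.randrange(n)
        else:                                            # exploit: act on observed data
            arm = max(range(n), key=lambda a: estimates[a])
        reward = rng.gauss(arm_means[arm], 0.1)          # noisy observation
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

est, cnt = epsilon_greedy_bandit([0.2, 0.5, 0.8])
# with enough steps, the best arm (index 2) dominates the pulls
```

The ε parameter is the knob between passive assimilation of past data and active probing of the environment, which is the tension the project's active-cognition framework addresses in far richer settings.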
Max ERC Funding
2 499 595 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym BeStMo
Project Beyond Static Molecules: Modeling Quantum Fluctuations in Complex Molecular Environments
Researcher (PI) Alexandre TKATCHENKO
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Consolidator Grant (CoG), PE4, ERC-2016-COG
Summary We propose focused theory developments and applications, which aim to substantially advance our ability to model and understand the behavior of molecules in complex environments. From a large repertoire of possible environments, we have chosen to concentrate on experimentally-relevant situations, including molecular fluctuations in electric and optical fields, disordered molecular crystals, solvated (bio)molecules, and molecular interactions at/through low-dimensional nanostructures. A challenging aspect of modeling such realistic environments is that both molecular electronic and nuclear fluctuations have to be treated efficiently at a robust quantum-mechanical level of theory for systems with 1000s of atoms. In contrast, the current state of the art in the modeling of complex molecular systems typically consists of Newtonian molecular dynamics employing classical force fields. We will develop radically new approaches for electronic and nuclear fluctuations that unify concepts and merge techniques from quantum-mechanical many-body Hamiltonians, statistical mechanics, density-functional theory, and machine learning. Our developments will be benchmarked using experimental measurements with terahertz (THz) spectroscopy, atomic-force and scanning tunneling microscopy (AFM/STM), time-of-flight (TOF) measurements, and molecular interferometry.
Our final goal is to bridge the accuracy of quantum mechanics with the efficiency of force fields, enabling large-scale predictive quantum molecular dynamics simulations for complex systems containing 1000s of atoms, and leading to novel conceptual insights into quantum-mechanical fluctuations in large molecular systems. The project goes well beyond the presently possible applications and once successful will pave the road towards having a suite of first-principles-based modeling tools for a wide range of realistic materials, such as biomolecules, nanostructures, disordered solids, and organic/inorganic interfaces.
Max ERC Funding
1 811 650 €
Duration
Start date: 2017-03-01, End date: 2022-08-31
Project acronym CLEANH2
Project Chemical Engineering of Fused MetalloPorphyrins Thin Films for the Clean Production of Hydrogen
Researcher (PI) Nicolas BOSCHER
Host Institution (HI) LUXEMBOURG INSTITUTE OF SCIENCE AND TECHNOLOGY
Country Luxembourg
Call Details Consolidator Grant (CoG), PE8, ERC-2019-COG
Summary This project stands in the general context of the current worldwide energy and environmental crisis. It aims to engineer a new generation of conjugated microporous polymers based on fused metalloporphyrins for the low-cost, clean and efficient production of hydrogen from solar water splitting. The CLEANH2 concept relies on the gas phase reaction of metalloporphyrins to engineer new heterogeneous catalysts with remarkable hydrogen production yields. Metalloporphyrins, selected by Nature to fulfil the main catalytic phenomena allowing life, are attractive molecules for water splitting owing to their highly conjugated structure and central metal ion, which can readily interconvert between different oxidation states to accomplish oxidation and reduction reactions. For efficiency and sustainability considerations, it is highly desirable to employ metalloporphyrins in conductive assemblies for heterogeneous catalysis. Nevertheless, due to the lack of a synthetic approach, the design and application of conjugated porphyrin assemblies is a largely unexplored topic in view of the plethora of available porphyrin patterns.
The central idea of CLEANH2 builds upon our recent advance in the gas phase synthesis and deposition of directly fused metalloporphyrin coatings. Progress in our approach is expected to open the way for the construction of powerful catalytic and photocatalytic materials. To achieve this, the key challenging goals of this project are: 1) the engineering of the microstructure and electronic structure of directly fused metalloporphyrin thin films; 2) harnessing the full potential of directly fused metalloporphyrin thin films for the as-yet-unmet goal of clean, high-quantum-yield overall water splitting for hydrogen production. The outcomes of CLEANH2 will be foundational for the engineering of directly fused metalloporphyrin systems and their implementation in advanced technological applications related to catalysis and solar energy.
Max ERC Funding
1 900 711 €
Duration
Start date: 2020-05-01, End date: 2025-04-30
Project acronym CLOUDMAP
Project Cloud Computing via Homomorphic Encryption and Multilinear Maps
Researcher (PI) Jean-Sebastien Coron
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary The past thirty years have seen cryptography move from arcane to commonplace: the Internet, mobile phones, banking systems, etc. Homomorphic cryptography now offers the tantalizing goal of being able to process sensitive information in encrypted form, without needing to compromise on the privacy and security of the citizens and organizations that provide the input data. More recently, cryptographic multilinear maps have revolutionized cryptography with the emergence of indistinguishability obfuscation (iO), which in theory can be used to realize numerous advanced cryptographic functionalities that previously seemed beyond reach. However, the security of multilinear maps is still poorly understood, and many iO schemes have been broken; moreover, all constructions of iO are currently impractical.
The goal of the CLOUDMAP project is to make these advanced cryptographic tasks usable in practice, so that citizens do not have to compromise on the privacy and security of their input data. This goal can only be achieved by considering the mathematical foundations of these primitives, working "from first principles", rather than focusing on premature optimizations. To achieve this goal, our first objective will be to better understand the security of the underlying primitives of multilinear maps and iO schemes. Our second objective will be to develop new approaches to significantly improve their efficiency. Our third objective will be to build applications of multilinear maps and iO that can be implemented in practice.
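The idea of processing encrypted data without decrypting it can be illustrated with the multiplicative homomorphism of textbook RSA: multiplying two ciphertexts yields a valid encryption of the product of the plaintexts. The toy below uses tiny primes for readability and is only an illustration of the homomorphic property, not of the far more sophisticated schemes the project concerns:

```python
# Textbook-RSA toy with tiny primes (insecure; illustration only).
p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 9
c_prod = (enc(m1) * enc(m2)) % n      # compute on ciphertexts only
assert dec(c_prod) == (m1 * m2) % n   # decrypts to the plaintext product
```

Fully homomorphic schemes extend this so that arbitrary circuits, not just products, can be evaluated under encryption, which is what makes outsourced computation on sensitive data conceivable.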
Max ERC Funding
2 491 266 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym INTERACT
Project Intelligent Non-woven Textiles and Elastomeric Responsive materials by Advancing liquid Crystal Technology
Researcher (PI) Jan Peter Felix Lagerwall
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Consolidator Grant (CoG), PE8, ERC-2014-CoG
Summary A grand challenge in today’s materials research is the realization of flexible materials that are also intelligent and functional. They will be the enablers of true breakthroughs in the hot trends of soft robotics and wearable technology. The standard approach to the latter is to decorate rubber sheets with electronic components, yielding two serious flaws: rubber is uncomfortable as it does not breathe, and solid-state electronics will eventually fail as a garment is flexed and stretched when worn. While the softness of rubber is ideal, it must be used in the form of textile fibers to provide breathability, and for long-term failure resistance we need intelligent components that are soft. A solution to this conundrum was recently presented by the PI with the concept of liquid crystal (LC) electrospinning. The extreme responsiveness of LCs is transferred to a non-woven textile by incorporating the LC in the fiber core, yielding a smart flexible mat with sensory function. Moreover, it consumes no power, providing a further advantage over electronics-based approaches. In a second research line he uses microfluidics to make LC rubber microshells, functioning as autonomous actuators which may serve as innovative components for soft robotics, and photonic crystal shells. This interdisciplinary project presents an ambitious agenda to advance these new concepts to the realization of soft, stretchable intelligent materials of revolutionary character.
Five specific objectives are in focus: 1) develop understanding of the dynamic response of LCs in these unconventional configurations; 2) establish interaction dynamics during polymerisation of an LC precursor; 3) elucidate LC response to gas exposure; 4) establish correlation between actuation response and internal order of curved LCE rubbers; and 5) assess usefulness of LC-functionalized fibers and polymerized LC shells, tubes and Janus particles in wearable sensors, soft robotic actuators and high-security identification tags.
Max ERC Funding
1 929 976 €
Duration
Start date: 2015-04-01, End date: 2020-03-31
Project acronym NanoThermo
Project Energy Conversion and Information Processing at Small Scales
Researcher (PI) Massimiliano Gennaro Esposito
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Consolidator Grant (CoG), PE3, ERC-2015-CoG
Summary Thermodynamics provided mankind with the intellectual tools to master energy transfers and energy conversion in macroscopic systems operating close to equilibrium. It is now one of the most fundamental theories in physics. My goal is to establish a thermodynamic theory describing energy conversion and information processing in small synthetic or biological systems operating far from equilibrium. Significant progress has been achieved in this direction over the last decade. The new theory is called stochastic thermodynamics (ST). It allows us to describe and understand energy conversion in systems as diverse as quantum junctions and molecular motors, and also to predict the energetic cost of information processing operations such as erasing bits of information or feedback controlling a small device. It was validated in single molecule pulling experiments, electronic circuits, NMR and colloidal particles in optical tweezers. Nevertheless, ST still suffers from serious limitations which prevent its application in more complex systems. Therefore, I propose to expand the theoretical foundations of ST far beyond its current realm of validity and to broaden the scope of its applications in various new directions. I want to answer questions such as: Can one design devices made of many small energy converters (e.g. thermoelectric junctions) arranged in such a way as to generate collective behaviors (e.g. synchronization) prompting higher powers and efficiencies? Can one do the same by engineering quantum effects? How can one reduce the dissipation occurring when computing very quickly with small devices? Why are metabolic networks so efficient in converting energy, transmitting information, and preventing errors (e.g. toxic byproducts)? I will do so in close contact with leading experimental groups in the field. My conviction is that ST will become as important for nanotechnologies and molecular biology as thermodynamics has been for the industrial revolution.
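The energetic cost of erasing bits of information mentioned above has a well-known thermodynamic floor, Landauer's limit of k_B·T·ln 2 of dissipated heat per erased bit. A quick back-of-the-envelope calculation (room temperature assumed; the helper name is ours):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # assumed room temperature, K

def landauer_cost(bits, temperature=T):
    """Minimum heat dissipated to erase `bits` bits of information."""
    return bits * k_B * temperature * math.log(2)

cost_one_bit = landauer_cost(1)   # ~2.87e-21 J per bit at 300 K
```

Real devices dissipate many orders of magnitude more than this bound, which is part of why the far-from-equilibrium cost of fast, small-scale computation is an open question the project targets.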
Max ERC Funding
1 669 029 €
Duration
Start date: 2016-07-01, End date: 2021-12-31
Project acronym NATURAL
Project Natural Program Repair
Researcher (PI) Tegawende F. Bissyande
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary Automatic bug fixing, i.e., the idea of having programs that fix other programs, is a long-standing dream that is increasingly embraced by the software engineering community. Indeed, despite the significant effort that humans put into reviewing code and running software test campaigns, programming mistakes slip by, with severe consequences. Fixing those mistakes automatically has recently been the focus of a number of potentially promising techniques. Proposed approaches are however recurrently criticized as being shallow (i.e., they mostly address unit test failures, which are often neither hard nor important problems).
Initial successes in automatic bug fixing are based on scenarios such as the following: when a bug is localized, patches are generated repetitively and automatically, through trial and error, until a valid patch is produced. The produced patch could then be later revised by developers. While the reported achievements are certainly worthwhile, they do not address what we believe is a more comprehensive challenge of software engineering: to systematically fix features of a software system based on end-user requirements.
The ambition of NATURAL is to develop a methodology for yielding an intelligent agent that is capable of receiving a natural language description of a problem that a user faces with a software feature, and then synthesizing code to address this problem so that it meets the user's expectations. Such a repair bot would be a trustworthy software contributor that is 1) first, targeting real bugs in production via exploiting bug reports, which remain largely under-explored, 2) second, aligning with the conversational needs of collaborative work via generating explanations for patch suggestions, 3) third, shifting the repair paradigm towards the design of self-improving systems via yielding novel algorithms that iteratively integrate feedback from humans. Ultimately, NATURAL will be transformative in the practice of software engineering.
Max ERC Funding
1 495 988 €
Duration
Start date: 2021-02-01, End date: 2026-01-31
Project acronym RealTCut
Project Towards real time multiscale simulation of cutting in non-linear materials with applications to surgical simulation and computer guided surgery
Researcher (PI) Stephane Pierre Alain Bordas
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Starting Grant (StG), PE8, ERC-2011-StG_20101014
Summary Surgeons are trained as apprentices. Some conditions are rarely encountered and surgeons will only be trained in the specific skills associated with a given situation if they come across it. At the end of their residency, it is hoped that they will have faced sufficiently many cases to be competent. This can be dangerous to the patients.
If we were able to reproduce faithfully, in a virtual environment, the audio, visual and haptic experience of a surgeon as they prod, pull and incise tissue, then, surgeons would not have to train on cadavers, phantoms, or on the patients themselves.
Only a few researchers in the Computational Mechanics community have attacked the mechanical problems related to surgical simulation, so that mechanical faithfulness is not on par with the audiovisual experience. This lack of fidelity in the reproduction of surgical acts such as cutting may explain why most surgeons who tested existing simulators report that the "sensation" fed back to them remains unrealistic. To date, the proposers are not aware of Computational Mechanics solutions addressing, at the same time, geometrical faithfulness, material realism, evolving cuts and quality control of the solution.
The measurable objectives for this research are as follows:
O1: Significantly alleviate the mesh generation and regeneration burden to represent organs’ geometries, underlying tissue microstructure and cuts with sufficient accuracy but minimal user intervention
O2: Move away from simplistic coarse-scale material models by deducing tissue rupture at the organ level from constitutive (e.g. damage) and contact models designed at the meso and micro scales
O3: Ensure real-time results through model order reduction coupled with the multi-scale fracture tools of O2
O4: Control solution accuracy and validate against a range of biomechanics problems, including real-life brain surgery interventions, with the data available at our collaborators’
Max ERC Funding
1 343 955 €
Duration
Start date: 2012-01-01, End date: 2016-12-31
Project acronym STAMFORD
Project Statistical Methods For High Dimensional Diffusions
Researcher (PI) Mark Podolskij
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Consolidator Grant (CoG), PE1, ERC-2018-COG
Summary In the past twenty years, the availability of vast, high-dimensional data, typically referred to as big data, has given rise to exciting challenges in various fields of mathematics and computer science. The increasing need for a better understanding of such data in internet traffic, biology, genetics, and economics has led to a revolution in statistical and machine learning, optimisation and numerical analysis. Due to the high dimensionality of modern statistical models, parameter estimation is a difficult task, and statisticians typically investigate estimation methods under sparsity constraints. While an abundance of estimation algorithms is now available for high dimensional discrete models, a rigorous mathematical investigation of estimation problems for high dimensional continuous-time processes is completely undeveloped.
The aim of STAMFORD is to provide a concise statistical theory for the estimation of high dimensional diffusions. Such high dimensional processes naturally appear in modelling particle interactions in physics, neural networks in biology, or large portfolios in economics, to name just a few. The methodological part of the project will require the development of novel advanced techniques in mathematical statistics and probability theory. In particular, new results will be needed in parametric and non-parametric statistics and high dimensional probability that reach far beyond the state of the art. Hence, a successful outcome of STAMFORD will not only have a tremendous impact on statistical inference for continuous-time models in the natural and applied sciences, but will also strongly influence the field of high dimensional statistics and probability.
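To make the estimation problem concrete, here is a hedged toy sketch of sparse drift estimation for a linear diffusion observed at discrete times: the path is Euler-discretised and each row of the drift matrix is fitted by a hand-rolled lasso. The model, dimensions, penalty and solver are illustrative stand-ins, not STAMFORD's methodology.

```python
import numpy as np

# Toy sketch: estimate a sparse drift matrix A of the linear diffusion
# dX_t = A X_t dt + dW_t from discrete observations, via an Euler
# discretisation and row-wise lasso regression (coordinate descent).

rng = np.random.default_rng(1)
d, n, dt = 20, 2000, 0.01

A_true = np.zeros((d, d))
A_true.flat[rng.choice(d * d, size=15, replace=False)] = rng.uniform(-0.8, 0.8, 15)
A_true -= 2.0 * np.eye(d)                  # diagonal shift keeps the process stable

# Simulate the path with an Euler-Maruyama scheme
X = np.zeros((n + 1, d))
for t in range(n):
    X[t + 1] = X[t] + dt * (A_true @ X[t]) + np.sqrt(dt) * rng.standard_normal(d)

Y = (X[1:] - X[:-1]) / dt                  # rescaled increments: Y ~ X A^T + noise
Z = X[:-1]

def lasso_row(Z, y, lam, n_iter=100):
    """Coordinate descent for min_b ||y - Z b||^2 / (2m) + lam ||b||_1."""
    m, p = Z.shape
    b = np.zeros(p)
    scale = (Z**2).sum(axis=0) / m
    for _ in range(n_iter):
        for j in range(p):
            # correlation of column j with the partial residual
            rho = Z[:, j] @ (y - Z @ b + Z[:, j] * b[j]) / m
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / scale[j]
    return b

A_hat = np.vstack([lasso_row(Z, Y[:, i], lam=0.05) for i in range(d)])
print(np.abs(A_hat - A_true).max())
```

Even in this toy setting the difficulty the project targets is visible: the effective sample size per parameter shrinks as d grows, so consistent recovery hinges on sparsity and on a theory that controls the discretisation and penalisation errors jointly.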
Max ERC Funding
1 655 048 €
Duration
Start date: 2019-09-01, End date: 2024-08-31
Project acronym TUNE
Project Testing the Untestable: Model Testing of Complex Software-Intensive Systems
Researcher (PI) Lionel, Claude, Laurent Briand
Host Institution (HI) UNIVERSITE DU LUXEMBOURG
Country Luxembourg
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary Software-intensive systems pervade modern society and industry. These systems often play critical roles from an economic, safety or security standpoint, making their dependability indispensable. Software Verification and Validation (V&V) is core to ensuring software dependability. The most prevalent V&V technique is testing, that is, the automated, systematic, and controlled execution of a system to detect faults or to show compliance with requirements. Increasingly, we are faced with systems that are untestable, meaning that traditional testing methods are highly expensive, time-consuming or infeasible to apply due to factors such as the systems’ continuous interactions with the environment and the deep intertwining of software with hardware.
TUNE will enable testing of untestable systems by revolutionising how we think about test automation. Our key idea is to frame testing on models rather than operational systems. We refer to such testing as model testing. The models that underlie model testing are executable representations of the relevant aspects of a system and its environment, alongside the risks of system failures. Such models inevitably have uncertainties due to complex, dynamic environment behaviours and the unknowns about the system. This necessitates that model testing be uncertainty-aware.
We propose to develop scalable, practical and uncertainty-aware techniques for test automation, leveraging our expertise on model-driven engineering and automated testing. Our solutions will synergistically combine metaheuristic search with system and risk models to drive the search for critical faults that entail the most risk. TUNE is the first initiative with the specific goal of raising the level of abstraction of testing from operational systems to models. The project will bring early and cost-effective automation to the testing of many critical systems that defy existing automation techniques, thus significantly improving the dependability of such systems.
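The combination of metaheuristic search with executable models can be sketched in miniature. In the hedged example below, a toy model of an emergency-braking controller stands in for the system model, the stopping margin stands in for the risk model, and a (1+1) evolutionary search mutates scenarios toward higher risk; none of this is TUNE's actual tooling, only an illustration of the search-over-models idea.

```python
import random

# Toy sketch of search-based (metaheuristic) model testing: search a cheap
# executable model, not the real system, for the riskiest input scenarios.

def model(speed, distance):
    """Toy model of an emergency-braking scenario: predicted stopping margin
    in metres (negative margin = collision risk)."""
    braking_distance = speed * speed / (2 * 7.0)   # assumed 7 m/s^2 deceleration
    reaction_distance = speed * 0.8                # assumed 0.8 s reaction time
    return distance - (braking_distance + reaction_distance)

def fitness(candidate):
    """Risk-driven fitness: a lower margin means a riskier, more valuable test."""
    return model(*candidate)

def search(seed=0, budget=500):
    """(1+1) evolutionary search: mutate the scenario, keep it if riskier."""
    rng = random.Random(seed)
    best = (rng.uniform(5, 40), rng.uniform(20, 200))   # (speed m/s, gap m)
    for _ in range(budget):
        speed = min(40.0, max(5.0, best[0] + rng.gauss(0, 2)))
        dist = min(200.0, max(20.0, best[1] + rng.gauss(0, 10)))
        if fitness((speed, dist)) < fitness(best):
            best = (speed, dist)
    return best

scenario = search()
print(scenario, fitness(scenario))
```

Because each fitness evaluation executes a model rather than the operational system, thousands of candidate scenarios can be explored cheaply, which is what makes the search tractable for systems that are too slow, costly or dangerous to exercise directly.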
Max ERC Funding
2 307 932 €
Duration
Start date: 2016-09-01, End date: 2022-02-28