Project acronym 4DRepLy
Project Closing the 4D Real World Reconstruction Loop
Researcher (PI) Christian THEOBALT
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FÖRDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary 4D reconstruction, i.e., camera-based dense reconstruction of dynamic scenes, is a grand challenge in computer graphics and computer vision. Despite great progress, 4D capture of the complex, diverse real world outside a studio is still far from feasible. 4DRepLy builds a new generation of high-fidelity 4D reconstruction (4DRecon) methods. They will be the first to efficiently capture all types of deformable objects (humans and other types) in crowded real-world scenes with a single color or depth camera. They capture space-time-coherent deforming geometry, motion, high-frequency reflectance and illumination at unprecedented detail, and will be the first to handle difficult occlusions, topology changes and large groups of interacting objects. They automatically adapt to new scene types, yet deliver models with meaningful, interpretable parameters. This requires far-reaching contributions: First, we develop groundbreaking new plasticity-enhanced model-based 4D reconstruction methods that automatically adapt to new scenes. Second, we develop radically new machine-learning-based dense 4D reconstruction methods. Third, these model- and learning-based methods are combined in two revolutionary new classes of 4DRecon methods: 1) advanced fusion-based methods and 2) methods with deep architectural integration. Both 1) and 2) are automatically designed in the 4D Real World Reconstruction Loop, a revolutionary new design paradigm in which 4DRecon methods refine and adapt themselves while continuously processing unlabeled real-world input. This overcomes the previously unbreakable scalability barrier to real-world scene diversity, complexity and generality. This paradigm shift opens up a new research direction in graphics and vision and has far-reaching relevance across many scientific fields. It enables new applications of profound societal reach and significant economic impact, e.g., for visual media and virtual/augmented reality, and for future autonomous and robotic systems.
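To make the Reconstruction Loop idea concrete, the following is a minimal, purely illustrative sketch (not the project's method; the toy 2D point model and all parameter values are invented for illustration): a parametric deformable model is fit to each unlabeled observation by energy minimization, and the model then updates its own shape template from those fits, adapting as it processes the stream.

```python
# Toy "reconstruction loop": fit a deformable 2D point model to a stream of
# unlabeled observations, then let the model adapt itself from its own fits.
import numpy as np

rng = np.random.default_rng(0)

def fit_frame(observed, template, steps=200, lr=0.1, reg=0.1):
    """One frame of model-based fitting: gradient steps on a least-squares
    alignment energy with a quadratic prior on the per-point deformation."""
    offset = np.zeros(2)
    deform = np.zeros_like(template)
    for _ in range(steps):
        residual = template + deform + offset - observed
        offset -= lr * residual.mean(axis=0)                # rigid part
        deform -= lr * (residual / len(template) + reg * deform)
    return offset, deform

template = rng.normal(size=(50, 2))                  # initial shape model
for frame in range(20):                              # unlabeled input stream
    truth = template * 1.05 + np.array([0.3, -0.2])  # synthetic "real world"
    observed = truth + 0.05 * rng.normal(size=truth.shape)
    offset, deform = fit_frame(observed, template)
    template += 0.1 * deform                         # plasticity: model adapts
print("residual deformation after adaptation:", float(np.abs(deform).mean()))
```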
Max ERC Funding
1 977 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym Active-DNA
Project Computationally Active DNA Nanostructures
Researcher (PI) Damien WOODS
Host Institution (HI) NATIONAL UNIVERSITY OF IRELAND MAYNOOTH
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary During the 20th century, computer technology evolved from bulky, slow, special-purpose mechanical engines to the now ubiquitous silicon chips and software that are one of the pinnacles of human ingenuity. The goal of the field of molecular programming is to take the next leap and build a new generation of matter-based computers using DNA, RNA and proteins. This will be accomplished by computer scientists, physicists and chemists designing molecules to execute "wet" nanoscale programs in test tubes. The workflow includes proposing theoretical models, mathematically proving their computational properties, physical modelling, and implementation in the wet lab.
The past decade has seen remarkable progress in building static 2D and 3D DNA nanostructures. However, unlike biological macromolecules and complexes, which are built via specified self-assembly pathways, execute robot-like movements, and undergo evolution, the activity of human-engineered nanostructures is severely limited. We will need sophisticated algorithmic ideas to build structures that rival active living systems. Active-DNA aims to address this challenge by achieving a number of objectives on computation, DNA-based self-assembly and molecular robotics. Active-DNA research will range from defining models and proving theorems that characterise the computational and expressive capabilities of such active programmable materials to experimental work implementing active DNA nanostructures in the wet lab.
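As one concrete example of the kind of theoretical model this workflow starts from, here is a minimal sketch of the abstract Tile Assembly Model (aTAM), a standard formalism in molecular programming; the model choice, tile set and glue strengths are illustrative assumptions, not Active-DNA specifics.

```python
# Minimal abstract Tile Assembly Model (aTAM) sketch: square tiles with glues
# on their sides attach to a seeded assembly whenever the total strength of
# matching glues meets the temperature threshold.
from collections import namedtuple

Tile = namedtuple("Tile", "name n e s w")       # glue label per side; "" = none
STRENGTH = {"x": 1, "": 0}                      # hypothetical glue strengths
TAU = 1                                         # temperature (tau = 2 enables
                                                # cooperative, "computing" growth)

TILES = [Tile("arm", n="", e="x", s="", w="x")] # tile types available in bulk
assembly = {(0, 0): Tile("seed", n="", e="x", s="", w="")}

def binding_strength(tile, pos):
    """Sum of glue strengths on sides that match an already placed neighbor."""
    x, y = pos
    sides = (((0, 1), "n", "s"), ((1, 0), "e", "w"),
             ((0, -1), "s", "n"), ((-1, 0), "w", "e"))
    return sum(STRENGTH[getattr(tile, mine)]
               for (dx, dy), mine, theirs in sides
               if (nb := assembly.get((x + dx, y + dy))) is not None
               and getattr(tile, mine) == getattr(nb, theirs))

while len(assembly) < 8:                        # grow until the demo cap
    frontier = {(x + dx, y + dy) for (x, y) in assembly
                for dx, dy in ((0, 1), (1, 0), (0, -1), (-1, 0))} - set(assembly)
    attachable = [(pos, t) for pos in sorted(frontier) for t in TILES
                  if binding_strength(t, pos) >= TAU]
    if not attachable:
        break
    pos, tile = attachable[0]
    assembly[pos] = tile                        # one attachment step
print(sorted((pos, t.name) for pos, t in assembly.items()))
```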
Max ERC Funding
2 349 603 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym ACUITY
Project Algorithms for coping with uncertainty and intractability
Researcher (PI) Nikhil Bansal
Host Institution (HI) TECHNISCHE UNIVERSITEIT EINDHOVEN
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary The two biggest challenges in solving practical optimization problems are computational intractability and the presence of uncertainty: most problems are either NP-hard or have incomplete input data, which makes an exact computation impossible.
Recently, there has been huge progress in our understanding of intractability, based on spectacular algorithmic and lower-bound techniques. For several problems, especially those with only local constraints, we can design optimum approximation algorithms that are provably the best possible. However, typical optimization problems usually involve complex global constraints and are much less understood. The situation is even worse for coping with uncertainty. Most of the algorithms are based on ad-hoc techniques, and there is no deeper understanding of what makes various problems easy or hard.
This proposal describes several new directions, together with concrete intermediate goals, that will break important new ground in the theory of approximation and online algorithms. The particular directions we consider are (i) extending the primal-dual method to systematically design online algorithms (a classical instance of this method is sketched below), (ii) building a structural theory of online problems based on work functions, (iii) developing new tools to use the power of strong convex relaxations, and (iv) designing new algorithmic approaches based on non-constructive proof techniques.
The proposed research is at the cutting edge of algorithm design and builds upon the recent success of the PI in resolving several longstanding questions in these areas. Any progress is likely to be a significant contribution to theoretical computer science and combinatorial optimization.
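For readers unfamiliar with direction (i), here is a minimal sketch of the online primal-dual method on the classic ski-rental problem (the fractional variant analysed by Buchbinder and Naor); it illustrates the design pattern only and is not a result of this project.

```python
# Online primal-dual ski rental: min B*x + sum_i z_i  s.t.  x + z_i >= 1.
# Each day the algorithm covers the new constraint (z_i = 1 - x) and raises
# the "buy" variable x multiplicatively; this yields an e/(e-1)-competitive
# fractional algorithm as B grows.
B = 10                                   # cost of buying; renting costs 1/day
c = (1 + 1.0 / B) ** B - 1               # normalizer; ratio is 1 + 1/c

def fractional_ski_rental(num_days):
    x = 0.0                              # primal: fraction of skis bought
    rent_cost = 0.0
    for _ in range(num_days):
        if x >= 1.0:
            break                        # fully bought, no more renting
        rent_cost += 1.0 - x             # primal: z_i = 1 - x covers day i
        x = x * (1 + 1.0 / B) + 1.0 / (c * B)   # multiplicative update
    return B * x + rent_cost             # total online cost

for days in (3, 10, 40):
    opt = min(days, B)                   # offline optimum: rent all days or buy
    alg = fractional_ski_rental(days)
    print(f"days={days:3d}  ALG={alg:6.2f}  OPT={opt:3d}  ratio={alg/opt:.3f}")
```

Running this shows the competitive ratio staying at roughly 1.63 for B = 10 regardless of the (unknown) number of ski days, matching the 1 + 1/c guarantee of the primal-dual analysis.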
Max ERC Funding
1 519 285 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym ALUNIF
Project Algorithms and Lower Bounds: A Unified Approach
Researcher (PI) Rahul Santhanam
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary One of the fundamental goals of theoretical computer science is to understand the possibilities and limits of efficient computation. This quest has two dimensions. The theory of algorithms focuses on finding efficient solutions to problems, while computational complexity theory aims to understand when and why problems are hard to solve. These two areas have different philosophies and use different sets of techniques. However, in recent years there have been indications of deep and mysterious connections between them.
In this project, we propose to explore and develop the connections between algorithmic analysis and complexity lower bounds in a systematic way. On the one hand, we plan to use complexity lower bound techniques as inspiration to design new and improved algorithms for Satisfiability and other NP-complete problems, as well as to analyze existing algorithms better. On the other hand, we plan to strengthen implications yielding circuit lower bounds from non-trivial algorithms for Satisfiability, and to derive new circuit lower bounds using these stronger implications.
This project has potential for massive impact in both the areas of algorithms and computational complexity. Improved algorithms for Satisfiability could lead to improved SAT solvers, and the new analytical tools would lead to a better understanding of existing heuristics. Complexity lower bound questions are fundamental but notoriously difficult, and new lower bounds would open the way to unconditionally secure cryptographic protocols and derandomization of probabilistic algorithms. More broadly, this project aims to initiate greater dialogue between the two areas, with an exchange of ideas and techniques which leads to accelerated progress in both, as well as a deeper understanding of the nature of efficient computation.
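Purely for concreteness about the central object of study, the following is a textbook DPLL satisfiability procedure (unit propagation plus branching); the project targets algorithms with non-trivial worst-case savings over brute force and their lower-bound consequences, which this sketch does not attempt.

```python
# Textbook DPLL SAT solver on CNF formulas given as collections of integer
# literals (negative = negated variable). Illustration only.
def dpll(clauses, assignment=()):
    clauses = [set(c) for c in clauses]
    assignment = dict(assignment)
    # Unit propagation: a one-literal clause forces that literal's value.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = [c - {-lit} for c in clauses if lit not in c]
        if any(len(c) == 0 for c in clauses):
            return None                      # conflict under this branch
    if not clauses:
        return assignment                    # all clauses satisfied
    var = abs(next(iter(clauses[0])))        # branch on some remaining variable
    for lit in (var, -var):
        result = dpll(clauses + [{lit}], assignment.items())
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([{1, 2}, {-1, 3}, {-2, -3}]))     # a satisfying assignment
```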
Max ERC Funding
1 274 496 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym Amitochondriates
Project Life without mitochondrion
Researcher (PI) Vladimir HAMPL
Host Institution (HI) UNIVERZITA KARLOVA
Call Details Consolidator Grant (CoG), LS8, ERC-2017-COG
Summary Mitochondria are often referred to as the “power houses” of eukaryotic cells. All eukaryotes were thought to have mitochondria of some form until 2016, when the first eukaryote thriving without mitochondria was discovered by our laboratory: the flagellate Monocercomonoides. Understanding the cellular functions of these cells, which represent a new functional type of eukaryote, and understanding the circumstances of the unique event of mitochondrial loss are the motivations for this proposal. The first objective focuses on cell physiology. We will perform a metabolomic study revealing the major metabolic pathways and concentrate further on elucidating the organism's unique system of iron-sulphur cluster assembly. In the second objective, we will investigate in detail the unique case of mitochondrial loss. We will examine two additional potentially amitochondriate lineages by means of genomics and transcriptomics, conduct experiments simulating the moment of mitochondrial loss, and try to induce mitochondrial loss in vitro by knocking out or knocking down genes for mitochondrial biogenesis. We have chosen Giardia intestinalis and Entamoeba histolytica as models for the latter experiments, because their mitochondria are already reduced to minimalistic “mitosomes” and because some genetic tools are already available for them. Successful mitochondrial knock-outs would enable us to study mitochondrial loss in ‘real time’ and in vivo. In the third objective, we will focus on transforming Monocercomonoides into a tractable laboratory model by developing methods of axenic cultivation and genetic manipulation. This will open new possibilities for the study of this organism and create a cell culture representing an amitochondriate model for cell-biological studies, enabling the dissection of mitochondrial effects from those of other compartments. The team is composed of the PI's laboratory and eight invited experts, and we believe it has the ability to address these challenging questions.
Max ERC Funding
1 935 500 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym ANTSolve
Project A multi-scale perspective into collective problem solving in ants
Researcher (PI) Ofer Feinerman
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), LS8, ERC-2017-COG
Summary Cognition improves an animal’s ability to tune its responses to environmental conditions. In group-living animals, communication works to form a collective cognition that expands the group’s abilities beyond those of individuals. Despite much research, there is to date little understanding of how collective cognition emerges within biological ensembles. A major obstacle to such an understanding is the rarity of comprehensive, multi-scale empirical data on these complex systems.
We have demonstrated cooperative load transport by ants to be an ideal system in which to study the emergence of cognition. As in other complex cognitive systems, the ants employ high levels of emergence to achieve efficient problem solving over a large range of scenarios. Unique to this system is its extreme amenability to experimental measurement and manipulation, where internal conflicts map to forces, abstract decision making is reflected in direction changes, and future planning is manifested in pheromone trails. This allows for an unprecedentedly detailed, multi-scale empirical description of the moment-to-moment unfolding of sophisticated cognitive processes.
This proposal is aimed at realizing this potential to the full. We will examine the ants’ problem-solving capabilities under a variety of environmental challenges. We will expose the underpinning rules on the different organizational scales, the flow of information between them, and their relative contributions to collective performance. This will allow for empirical comparisons between the ‘group’ and the ‘sum of its parts’, from which we will quantify the level of emergence in this system. Using the language of information, we will map the boundaries of this group’s collective cognition and relate them to the range of habitable environmental niches. Moreover, we will generalize these insights to formulate a new paradigm of emergence in biological groups, opening new horizons in the study of cognitive processes in general.
Max ERC Funding
2 000 000 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym BIGBAYES
Project Rich, Structured and Efficient Learning of Big Bayesian Models
Researcher (PI) Yee Whye Teh
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary As datasets grow ever larger in scale, complexity and variety, there is an increasing need for powerful machine learning and statistical techniques that are capable of learning from such data. Bayesian nonparametrics is a promising approach to data analysis that is increasingly popular in machine learning and statistics. Bayesian nonparametric models are highly flexible models with infinite-dimensional parameter spaces that can be used to directly parameterise and learn about functions, densities, conditional distributions, etc., and have been successfully applied to regression, survival analysis, language modelling, time series analysis, and visual scene analysis, among others. However, to successfully use Bayesian nonparametric models to analyse the high-dimensional and structured datasets now commonly encountered in the age of Big Data, we will have to overcome a number of challenges. Namely, we need to develop Bayesian nonparametric models that can learn rich representations from structured data, and we need computational methodologies that can scale effectively to the large and complex models of the future. We will ground our developments in relevant applications, particularly natural language processing (learning distributed representations for language modelling and compositional semantics) and genetics (modelling genetic variations arising from population, genealogical and spatial structures).
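To make "infinite-dimensional parameter space" concrete, here is a minimal sketch of one canonical Bayesian nonparametric object: a (truncated) draw from a Dirichlet process via the stick-breaking construction; the truncation level and parameter values are illustrative choices only.

```python
# A draw from a Dirichlet process DP(alpha, H) via stick-breaking, truncated
# for illustration. The "parameter" is the whole sequence of weights and atoms.
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, base_sampler, truncation=1000):
    betas = rng.beta(1.0, alpha, size=truncation)       # stick proportions
    remaining = np.concatenate([[1.0], np.cumprod(1 - betas[:-1])])
    weights = betas * remaining                         # w_k = beta_k * prod_j<k (1 - beta_j)
    atoms = base_sampler(truncation)                    # atoms drawn from H
    return weights, atoms

# DP with a standard-normal base measure; larger alpha -> more, smaller clusters.
weights, atoms = stick_breaking(alpha=3.0,
                                base_sampler=lambda n: rng.normal(size=n))
top = np.argsort(weights)[::-1][:5]
for k in top:
    print(f"weight {weights[k]:.3f} at atom {atoms[k]:+.3f}")
print("mass in top 5 atoms:", round(float(weights[top].sum()), 3))
```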
Max ERC Funding
1 918 092 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym BITCRUMBS
Project Towards a Reliable and Automated Analysis of Compromised Systems
Researcher (PI) Davide BALZAROTTI
Host Institution (HI) EURECOM
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary "The vast majority of research in computer security is dedicated to the design of detection, protection, and prevention solutions. While these techniques play a critical role to increase the security and privacy of our digital infrastructure, it is enough to look at the news to understand that it is not a matter of ""if"" a computer system will be compromised, but only a matter of ""when"". It is a well known fact that there is no 100% secure system, and that there is no practical way to prevent attackers with enough resources from breaking into sensitive targets. Therefore, it is extremely important to develop automated techniques to timely and precisely analyze computer security incidents and compromised systems. Unfortunately, the area of incident response received very little research attention, and it is still largely considered an art more than a science because of its lack of a proper theoretical and scientific background.
The objective of BITCRUMBS is to rethink the Incident Response (IR) field from its foundations by proposing a more scientific and comprehensive approach to the analysis of compromised systems. BITCRUMBS will achieve this goal in three steps: (1) by introducing a new systematic approach to precisely measure the effectiveness and accuracy of IR techniques and their resilience to evasion and forgery; (2) by designing and implementing new automated techniques to cope with advanced threats and the analysis of IoT devices; and (3) by proposing a novel forensics-by-design development methodology and a set of guidelines for the design of future systems and software.
To provide the right context for these new techniques and show the impact of the project in different fields and scenarios, BITCRUMBS plans to address its objectives using real case studies borrowed from two different
domains: traditional computer software, and embedded systems.
"
Max ERC Funding
1 991 504 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym Browsec
Project Foundations and Tools for Client-Side Web Security
Researcher (PI) Matteo MAFFEI
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary The constantly increasing number of attacks on web applications shows how their rapid development has not been accompanied by adequate security foundations, and demonstrates the lack of solid security enforcement tools. Indeed, web applications expose a gigantic attack surface, which hinders a rigorous understanding and enforcement of security properties. Hence, despite worthwhile efforts to design secure web applications, users will for some time be confronted with vulnerable, or maliciously crafted, code. Unfortunately, end users at present have no reliable way to protect themselves from malicious applications.
BROWSEC will develop a holistic approach to client-side web security, laying its theoretical foundations and developing innovative security enforcement technologies. In particular, BROWSEC will deliver the first client-side tool for securing web applications that is both practical, in that it is implemented as a browser extension and can thus be easily deployed at scale, and provably sound, i.e., backed by machine-checked proofs that the tool provides end users with the required security guarantees. At the core of the proposal lies a novel monitoring technique, which treats the browser as a black box and intercepts its inputs and outputs in order to prevent dangerous information flows (a toy sketch of this idea follows the summary). With this lightweight monitoring approach, we aim to enforce strong security properties without requiring any expensive and, given the dynamic nature of web applications, statically infeasible program analysis.
BROWSEC is thus a multidisciplinary research effort, promising practical impact and delivering breakthrough advancements in various disciplines, such as web security, JavaScript semantics, software engineering, and program verification.
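The following toy sketch illustrates the black-box monitoring idea described above, using a coarse taint bit in place of a real information-flow policy; the class, channel and sink names are hypothetical and do not reflect BROWSEC's actual design or API.

```python
# Black-box reference monitor on a component's I/O channel: once a secret
# input has flowed in, outputs to untrusted sinks are blocked.
class IOMonitor:
    def __init__(self, trusted_sinks):
        self.trusted_sinks = set(trusted_sinks)
        self.tainted = False                   # coarse one-bit taint state

    def on_input(self, channel, data):
        if channel == "secret":                # e.g. password field, cookie store
            self.tainted = True
        return data                            # inputs pass through unmodified

    def on_output(self, sink, data):
        if self.tainted and sink not in self.trusted_sinks:
            raise PermissionError(f"blocked flow to untrusted sink {sink!r}")
        return data                            # flow permitted

monitor = IOMonitor(trusted_sinks={"https://bank.example"})
monitor.on_input("public", "page rendered")
monitor.on_output("https://ads.example", "telemetry")   # fine: no secret seen yet
monitor.on_input("secret", "session-cookie=...")
try:
    monitor.on_output("https://ads.example", "session-cookie=...")
except PermissionError as e:
    print(e)                                   # dangerous flow intercepted
```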
Max ERC Funding
1 990 000 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym CAUSALPATH
Project Next Generation Causal Analysis: Inspired by the Induction of Biological Pathways from Cytometry Data
Researcher (PI) Ioannis Tsamardinos
Host Institution (HI) PANEPISTIMIO KRITIS
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary Discovering the causal mechanisms of a complex system of interacting components is necessary in order to control it. Computational Causal Discovery (CD) is a field that offers the potential to discover causal relations, under certain conditions, from observational data alone or with a limited number of interventions/manipulations (a minimal sketch of constraint-based CD follows this summary).
An important and challenging biological problem that may otherwise take decades of experimental work is the induction of biological cellular pathways; pathways are informal causal models indispensable in biological research and drug design. Recent exciting advances in flow/mass cytometry biotechnology allow the generation of large-sample datasets containing measurements on single cells, making the problem of pathway learning amenable to CD methods.
CAUSALPATH builds upon and further advances recent breakthrough developments in CD methods to enable the induction of biological pathways from cytometry and other omics data. As a testbed problem, we focus on the differentiation of human T-cells; these are involved in autoimmune and inflammatory diseases as well as cancer, and are thus targets of new drug development for a range of chronic diseases. The biological problem acts as our proving ground for developing general novel formalisms, practical algorithms, and useful tools, while pointing to fundamental CD problems: the presence of feedback cycles, the presence of latent confounding variables, CD from time-course data, Integrative Causal Analysis (INCA) of heterogeneous datasets, and others.
Three features complement CAUSALPATH’s approach: (A) methods development will co-evolve with biological wet-lab experiments that periodically test the algorithmic postulates; (B) open-source tools will be developed for the non-expert; and (C) commercial exploitation of the results will be sought.
CAUSALPATH brings together an interdisciplinary team committed to this vision. It builds upon recent important results by the PI’s group on INCA algorithms.
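As the minimal illustration promised above, the sketch below runs the skeleton phase of a PC-style constraint-based CD algorithm on a synthetic three-variable chain, removing the spurious edge via a partial-correlation independence test; the data-generating model and all parameter choices are illustrative, not CAUSALPATH's algorithms.

```python
# Skeleton phase of PC-style causal discovery: drop an edge i-j whenever i and
# j test conditionally independent given some subset of the other variables.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def ci_test(data, i, j, cond, z_crit=1.96):
    """Gaussian CI test: Fisher z on the partial correlation of i, j | cond."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(len(data) - len(cond) - 3)
    return abs(z) < z_crit                   # True => conditionally independent

# Ground-truth chain X0 -> X1 -> X2: X0 and X2 are dependent, but independent
# given X1, so the X0-X2 edge should be removed.
n = 2000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
data = np.column_stack([x0, x1, x2])

edges = {frozenset(e) for e in combinations(range(3), 2)}
for i, j in combinations(range(3), 2):
    others = [k for k in range(3) if k not in (i, j)]
    for size in range(len(others) + 1):
        for cond in combinations(others, size):
            if frozenset((i, j)) in edges and ci_test(data, i, j, cond):
                edges.discard(frozenset((i, j)))
print("recovered skeleton:", sorted(tuple(sorted(e)) for e in edges))
```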
Max ERC Funding
1 724 000 €
Duration
Start date: 2015-01-01, End date: 2019-12-31