Project acronym Big Splash
Project Big Splash: Efficient Simulation of Natural Phenomena at Extremely Large Scales
Researcher (PI) Christopher John Wojtan
Host Institution (HI) Institute of Science and Technology Austria
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Computational simulations of natural phenomena are essential in science, engineering, product design, architecture, and computer graphics applications. However, despite progress in numerical algorithms and computational power, it is still infeasible to compute detailed simulations at large scales. To make matters worse, important phenomena like turbulent splashing liquids and fracturing solids rely on a delicate coupling between small-scale details and large-scale behavior. Brute-force computation of such phenomena is intractable, and current adaptive techniques are too fragile, too costly, or too crude to capture subtle instabilities at small scales. Increases in computational power and parallel algorithms will improve the situation, but progress will only be incremental until we address the problem at its source.
I propose two main approaches to this problem of efficiently simulating large-scale liquid and solid dynamics. My first avenue of research combines numerics and shape: I will investigate a careful decoupling of dynamics from geometry, allowing essential shape details to be preserved and retrieved without wasting computation. I will also develop methods for merging small-scale analytical solutions with large-scale numerical algorithms. (These ideas show particular promise for phenomena like splashing liquids and fracturing solids, whose small-scale behaviors are poorly captured by standard finite element methods.) My second main research direction is the manipulation of large-scale simulation data: given the redundant and parallel nature of physics computation, we will drastically speed up computation with novel dimension-reduction and data-compression approaches. We can also minimize unnecessary computation by re-using existing simulation data. The novel approaches resulting from this work will synergize to enable the simulation and understanding of complicated natural and biological processes that are presently infeasible to compute.
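The data-reduction direction can be illustrated with a toy proper orthogonal decomposition (POD) of simulation snapshots. Everything below (the 1D "wave" data, the rank, the noise level) is an invented illustration of the general idea, not material from the proposal:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
# 50 snapshots built from only 3 underlying spatial modes plus tiny noise,
# mimicking the redundancy in physics data that the proposal aims to exploit.
coeffs = rng.normal(size=(50, 3))
modes = np.stack([np.sin(x), np.sin(2 * x), np.cos(3 * x)])
snapshots = coeffs @ modes + 1e-6 * rng.normal(size=(50, 200))

# Keep only the r leading singular triplets as a reduced basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
compressed = (U[:, :r] * s[:r]) @ Vt[:r]  # rank-r reconstruction

# Relative reconstruction error is tiny because the data is near rank 3.
rel_err = np.linalg.norm(snapshots - compressed) / np.linalg.norm(snapshots)
```

Real simulation fields are far higher-dimensional, but the same low-rank structure is what makes dimension reduction pay off.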
Max ERC Funding
1 500 000 €
Duration
Start date: 2015-03-01, End date: 2020-02-29
Project acronym COMPLEX REASON
Project The Parameterized Complexity of Reasoning Problems
Researcher (PI) Stefan Szeider
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Reasoning, i.e. deriving conclusions from facts, is a fundamental task in Artificial Intelligence, arising in a wide range of applications from Robotics to Expert Systems. The aim of this project is to devise new efficient algorithms for real-world reasoning problems and to gain new insights into what makes a reasoning problem hard and what makes it easy. As the key to novel and groundbreaking results, we propose to study reasoning problems within the framework of Parameterized Complexity, a new and rapidly emerging field of Algorithms and Complexity. Parameterized Complexity takes into account the structural aspects of problem instances that are most significant for empirically observed problem hardness. Most of the considered reasoning problems are intractable in general, but the real-world context of their origin provides structural information that can be made accessible to algorithms in the form of parameters. This makes Parameterized Complexity an ideal setting for the analysis and efficient solution of these problems. A systematic study of the Parameterized Complexity of reasoning problems that covers both theoretical and empirical aspects has so far been missing. This proposal sets out to do exactly that and therefore has great potential for groundbreaking new results. The proposed research aims at a significant impact on the research culture by laying the grounds for closer cooperation between theorists and practitioners.
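As a concrete illustration of fixed-parameter tractability (a textbook example, not taken from the proposal), here is the classic bounded-search-tree algorithm for Vertex Cover, whose running time is exponential only in the parameter k, not in the input size:

```python
def vertex_cover_fpt(edges, k):
    """Decide whether the graph (given as a list of edge tuples) has a
    vertex cover of size <= k, in O(2^k * |E|) time.  The exponential
    blow-up depends only on the parameter k -- the hallmark of
    fixed-parameter tractability."""
    # If no edge is uncovered, the empty remaining cover suffices.
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Any cover must contain u or v; branch on both choices.
    rest_u = [e for e in edges if u not in e]
    rest_v = [e for e in edges if v not in e]
    return vertex_cover_fpt(rest_u, k - 1) or vertex_cover_fpt(rest_v, k - 1)
```

A triangle, for example, has no vertex cover of size 1 but does have one of size 2. The "parameter" here is the cover size; in reasoning problems it is instead a structural measure of the instance, but the algorithmic principle is the same.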
Max ERC Funding
1 421 130 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym CUTACOMBS
Project Cuts and decompositions: algorithms and combinatorial properties
Researcher (PI) Marcin PILIPCZUK
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary In this proposal we plan to extend mathematical foundations of algorithms for various variants of the minimum cut problem within theoretical computer science.
Recent advances in understanding the structure of small cuts and the tractability of cut problems have resulted in a mature algorithmic toolbox for undirected graphs under the paradigm of parameterized complexity. From this position, we now aim at a full understanding of the tractability of cut problems in the more challenging case of directed graphs, and we see opportunities to apply the aforementioned successful structural approach to advance on major open problems in other paradigms of theoretical computer science.
The specific goals of the project are grouped in the following three themes.
Directed graphs. Chart the parameterized complexity of graph separation problems in directed graphs and provide a fixed-parameter tractability toolbox, equally deep as the one in undirected graphs. Provide tractability foundations for routing problems in directed graphs, such as the disjoint paths problem with symmetric demands.
Planar graphs. Resolve main open problems with respect to network design and graph separation problems in planar graphs under the following three paradigms: parameterized complexity, approximation schemes, and cut/flow/distance sparsifiers. Recently discovered connections uncover significant potential in synergy between these three algorithmic approaches.
Tree decompositions. Show improved tractability of graph isomorphism testing in sparse graph classes. Combine the algorithmic toolbox of parameterized complexity with the theory of minimal triangulations to advance our knowledge in structural graph theory, both pure (focused on the Erdős-Hajnal conjecture) and algorithmic (focused on the tractability of Maximum Independent Set and 3-Coloring).
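For readers unfamiliar with the minimum cut problem at the proposal's core, a minimal sketch follows: the standard Edmonds-Karp max-flow algorithm (not a contribution of this project), which by max-flow/min-cut duality returns the capacity of a minimum s-t cut:

```python
from collections import deque

def min_cut_value(n, cap_edges, s, t):
    """Value of a minimum s-t cut in a directed graph with n vertices,
    computed as the maximum s-t flow (Edmonds-Karp: BFS augmenting paths)."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in cap_edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left: flow equals min cut
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```

The project's themes concern far harder variants (multiway and directed separation problems), where such textbook algorithms no longer suffice.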
Max ERC Funding
1 228 250 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym HOMOVIS
Project High-level Prior Models for Computer Vision
Researcher (PI) Thomas Pock
Host Institution (HI) TECHNISCHE UNIVERSITAET GRAZ
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary For more than 50 years, computer vision has been a very active research field, but it is still far from matching the abilities of the human visual system. This stunning performance of the human visual system can mainly be attributed to a highly efficient three-layer architecture: a low-level layer that sparsifies the visual information by detecting important image features such as image gradients, a mid-level layer that implements disocclusion and boundary-completion processes, and finally a high-level layer that is concerned with the recognition of objects.
Variational methods are certainly among the most successful methods for low-level vision. However, it is very unlikely that these methods can be further improved without the integration of high-level prior models. Therefore, we propose a unified mathematical framework that allows for a natural integration of high-level priors into low-level variational models. In particular, we propose to represent images in a higher-dimensional space inspired by the architecture of the visual cortex. This space decomposes image gradients into magnitude and direction and hence lifts the 2D image to a 3D space. This has several advantages: firstly, the higher-dimensional embedding makes it possible to implement mid-level tasks such as boundary completion and disocclusion in a very natural way. Secondly, the lifted space provides explicit access to the orientation and the magnitude of image gradients. In turn, distributions of gradient orientations – known to be highly effective for object detection – can be utilized as high-level priors. This inverts the bottom-up nature of object detectors and hence adds an efficient top-down process to low-level variational models.
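The lifting idea can be sketched in a few lines. This toy version (all details, including the binning scheme, are illustrative assumptions and not the project's actual model) spreads a 2D image into a 3D volume indexed by position and gradient-orientation bin:

```python
import numpy as np

def lift_image(img, n_orient=8):
    """Toy 'lifting': decompose image gradients into magnitude and
    direction, and embed the 2D image into a 3D volume indexed by
    (row, column, orientation bin)."""
    gy, gx = np.gradient(img.astype(float))          # gradients along y, x
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)      # direction in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_orient).astype(int), n_orient - 1)
    lifted = np.zeros(img.shape + (n_orient,))
    iy, ix = np.indices(img.shape)
    lifted[iy, ix, bins] = mag                       # place magnitude in its bin
    return lifted
```

For a horizontal intensity ramp, all gradient energy lands in the orientation-0 slice of the volume, making the orientation distribution directly accessible, which is the point of the higher-dimensional embedding.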
The developed mathematical approaches will go significantly beyond traditional variational models for computer vision and hence will define a new state of the art in the field.
Max ERC Funding
1 473 525 €
Duration
Start date: 2015-06-01, End date: 2020-05-31
Project acronym MATERIALIZABLE
Project MATERIALIZABLE: Intelligent fabrication-oriented Computational Design and Modeling
Researcher (PI) Bernd BICKEL
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary While access to 3D-printing technology is becoming ubiquitous and provides revolutionary possibilities for fabricating complex, functional, multi-material objects with stunning properties, its potential impact is currently significantly limited by the lack of efficient and intuitive methods for content creation. Existing tools are usually restricted to expert users, have been developed based on the capabilities of traditional manufacturing processes, and do not sufficiently take fabrication constraints into account. Scientifically, we face the fundamental challenge that existing simulation techniques and design approaches for predicting the physical properties of materials and objects at the resolution of modern 3D printers are too slow and do not scale with increasing object complexity. The problem is extremely challenging because real-world materials exhibit extraordinary variety and complexity.
To address these challenges, I suggest a novel computational approach that facilitates intuitive design, accurate and fast simulation techniques, and a functional representation of 3D content. I propose a multi-scale representation of functional goals and hybrid models that describe the physical behavior at a coarse scale and its relationship to the underlying material composition at the resolution of the 3D printer. My approach is to combine data-driven and physically-based modeling, providing both the required speed and accuracy through smart precomputations and tailored simulation techniques that operate on the data. A key aspect of this modeling and simulation approach is to identify domains that are sufficiently low-dimensional to be sampled correctly. Subsequently, I propose a fundamental rethinking of the workflow, leading to solutions that allow synthesizing model instances optimized on-the-fly for a specific output device. The principal applicability will be evaluated for functional goals such as appearance, deformation, and sensing capabilities.
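The precomputation-plus-interpolation idea can be sketched as follows. The "fine-scale" model below is a stand-in closed form chosen purely for illustration; in the project it would be an expensive simulation at printer resolution:

```python
import numpy as np

def fine_scale_response(strain):
    # Stand-in for a costly fine-scale material simulation
    # (here an arbitrary smooth nonlinear stress-strain law).
    return 2.0 * strain + 0.5 * strain ** 3

# Offline precomputation: sample the response over a low-dimensional
# parameter domain (1D here) once, at the expensive resolution.
samples = np.linspace(0.0, 1.0, 11)
responses = fine_scale_response(samples)

def coarse_response(strain):
    # Runtime surrogate: cheap piecewise-linear interpolation of the table,
    # replacing the expensive fine-scale call.
    return np.interp(strain, samples, responses)
```

The sketch shows why identifying a sufficiently low-dimensional domain matters: the precomputed table only stays small, and the interpolation only stays accurate, when the queried domain has few parameters.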
Max ERC Funding
1 497 730 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym RoboExNovo
Project Robots learning about objects from externalized knowledge sources
Researcher (PI) Barbara Caputo
Host Institution (HI) FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary While today’s robots are able to perform sophisticated tasks, they can only act on objects they have been trained to recognize. This is a severe limitation: any robot will inevitably face novel situations in unconstrained settings, and thus will always have knowledge gaps. This calls for robots able to learn continuously about objects by themselves. The learning paradigm of state-of-the-art robots is sensorimotor toil, i.e. the process of acquiring knowledge by generalization over observed stimuli. This is in line with cognitive theories claiming that cognition is embodied and situated, so that all knowledge acquired by a robot is specific to its sensorimotor capabilities and to the situation in which it was acquired. Still, humans are also capable of learning from externalized sources – like books, illustrations, etc. – containing knowledge that is necessarily unembodied and unsituated. To overcome this gap, RoboExNovo proposes a paradigm shift. I will develop a new generation of robots able to acquire perceptual and semantic knowledge about objects from externalized, unembodied resources, to be used in situated settings. As the largest existing body of externalized knowledge, I will consider the Web as the source from which to learn. To achieve this, I propose to build a translation framework between the representations used by robots in their situated experience and those used on the Web, based on relational structures establishing links between related percepts and between percepts and the semantics they support. My leading expertise in machine learning applied to multimodal data and robot vision puts me in a strong position to realize this project.
By enabling robots to use knowledge resources on the Web that were not explicitly designed to be accessed for this purpose, RoboExNovo will pave the way for ground-breaking technological advances in home and service robotics, driver assistance systems, and in general any Web-connected situated device.
Max ERC Funding
1 496 277 €
Duration
Start date: 2015-06-01, End date: 2020-05-31
Project acronym SMART
Project Strong Modular proof Assistance: Reasoning across Theories
Researcher (PI) Cezary Seweryn KALISZYK
Host Institution (HI) UNIVERSITAET INNSBRUCK
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary Formal proof technology delivers an unparalleled level of certainty and security. Nevertheless, applying proof assistants to the verification of complex theories and designs is still extremely laborious. High-profile certification projects, such as seL4, CompCert, and Flyspeck, require tens of person-years. We recently demonstrated that this effort can be significantly reduced by combining reasoning and learning in so-called hammer systems: 40% of the Flyspeck, HOL4, Isabelle/HOL, and Mizar top-level lemmas can be proved automatically.
Today's early generation of hammers consists of individual systems limited to very few proof assistants. The accessible knowledge repositories are isolated, and there is no reuse of hammer components.
It is possible to achieve a breakthrough in proof automation by developing new AI methods that combine reasoning knowledge and techniques into a smart hammer that works over a very large part of today's formalized knowledge. The main goal of the project is to develop a strong and uniform learning-reasoning system available for multiple logical foundations. To achieve this, we will develop: (a) uniform learning methods, (b) reusable ATP encoding components for different foundational aspects, (c) integration of proof reconstruction, and (d) methods for knowledge extraction, reuse, and content merging. The single proof advice system will be made available for multiple proof assistants and their vast heterogeneous libraries.
The ultimate outcome is an advice system able to automatically prove half of Coq, ACL2, and Isabelle/ZF top-level theorems. Additionally we will significantly improve success rates for HOL provers and Mizar. The combined smart advice method together with the vast accumulated knowledge will result in a novel kind of tool, which allows working mathematicians to automatically find proofs of many simple conjectures, paving the way for the widespread use of formal proof in mathematics and computer science.
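The learning half of a hammer system, premise selection, can be caricatured with a similarity ranking over symbol features: rank the known lemmas by feature overlap with the conjecture and hand the best few to an automated prover. The lemma names and feature sets below are made up for illustration:

```python
def select_premises(conjecture_feats, lemma_db, k=2):
    """Rank lemmas by Jaccard similarity of their symbol-feature sets
    to the conjecture's features; return the k most promising names."""
    def score(feats):
        inter = len(conjecture_feats & feats)
        union = len(conjecture_feats | feats)
        return inter / union if union else 0.0
    ranked = sorted(lemma_db, key=lambda name: score(lemma_db[name]),
                    reverse=True)
    return ranked[:k]

# Hypothetical lemma library: name -> set of symbols occurring in it.
lemma_db = {
    "add_comm": {"add", "nat", "comm"},
    "mul_comm": {"mul", "nat", "comm"},
    "rev_rev":  {"list", "rev"},
}
best = select_premises({"add", "nat"}, lemma_db)
```

Production hammers use far richer features and learned models, but this captures the core loop: learn which known facts are relevant, then let the prover search only among those.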
Max ERC Funding
1 449 000 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym SOSNA
Project Expressive Power of Tree Logics
Researcher (PI) Mikolaj Bojanczyk
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Logics for expressing properties of labeled trees and forests figure importantly in several different areas of Computer Science, including verification (branching temporal logics) and database theory (many XML query languages). The goal of this project is to investigate the expressive power of tree logics, mainly those logics that can be captured by tree automata. A similar study, but for word languages, is one of the main lines of research in formal language theory. The study of the expressive power of word logics has led to many beautiful and fundamental results, including Schützenberger's characterization of star-free languages and the Krohn-Rhodes decomposition theorem. We intend to extend this research to trees. The types of questions we want to answer are: What is the expressive power of first-order logic on trees? Is there a Krohn-Rhodes decomposition theory for trees? What is a tree group? We expect that our study of tree logics will use algebraic techniques, possibly in the setting of forest algebra (as introduced by the principal investigator and Igor Walukiewicz). We would also like to extend the algebraic setting beyond regular languages of finite trees, to e.g. infinite trees or nonregular languages.
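As a minimal illustration of the automata side (a textbook construction, not drawn from the proposal), here is a deterministic bottom-up tree automaton over binary trees with leaf labels 'a' and 'b'. It accepts exactly the trees containing at least one 'b', a property easily expressed in first-order logic on trees:

```python
# Trees are nested tuples: a leaf is 'a' or 'b', an inner node is a pair
# (left_subtree, right_subtree).  The automaton has two states, 'no_b'
# and 'has_b'; transitions propagate 'has_b' upward from any subtree.
def run(tree):
    if tree == 'a':
        return 'no_b'
    if tree == 'b':
        return 'has_b'
    left, right = tree
    states = {run(left), run(right)}
    return 'has_b' if 'has_b' in states else 'no_b'

def accepts(tree):
    # 'has_b' is the sole accepting state.
    return run(tree) == 'has_b'
```

The expressiveness questions of the project ask which such automaton-recognizable tree languages correspond to which logics, the tree analogue of Schützenberger-style characterizations for words.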
Max ERC Funding
799 920 €
Duration
Start date: 2009-11-01, End date: 2014-10-31
Project acronym SYMCAR
Project Symbolic Computation and Automated Reasoning for Program Analysis
Researcher (PI) Laura Kovacs
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Individuals, industries, and nations depend on software and on systems using software. Automated approaches are needed to eliminate tedious aspects of software development, helping software developers deal with increasing software complexity. Automatic program analysis aims to discover program properties, preventing programmers from introducing errors while making changes, and can drastically cut the time needed for program development.
This project addresses the challenge of automating program analysis by developing rigorous mathematical techniques for analyzing the logically complex parts of software. We will carry out novel research in computer theorem proving and symbolic computation, and integrate program analysis techniques with new approaches to program assertion synthesis and reasoning with both theories and quantifiers. The common theme of the project is the creative development of automated reasoning techniques based on our recently introduced symbol elimination method. Symbol elimination makes the project challenging, risky, and interdisciplinary, bridging computer science, mathematics, and logic.
Symbol elimination will enhance program analysis, in particular by generating polynomial and quantified first-order program properties that cannot be derived by other methods. As many program properties can best be expressed using quantified formulas with arithmetic, our project will make a significant step in analyzing large systems. Since program analysis requires reasoning in the combination of first-order logic and theories, we will design efficient algorithms for automated reasoning with both theories and quantifiers. Our results will be supported by the development of world-leading tools supporting symbol elimination in program analysis.
Our project brings breakthrough approaches to program analysis, which, together with other advances in the area, will reduce the cost of developing safe and reliable computer systems used in our daily life.
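As an illustration of the kind of polynomial loop property such invariant-generation techniques target (the loop and the identity below are a textbook example, not output of the project's tools): for the loop `s := 0; i := 0; while ...: i := i + 1; s := s + i`, the polynomial identity 2*s = i*(i + 1) holds at every iteration, and checking it over a trace is easy even though discovering it automatically is not:

```python
def trace(n):
    """Execute the loop for n iterations and record each (s, i) state."""
    s, i = 0, 0
    states = [(s, i)]
    for _ in range(n):
        i += 1
        s += i
        states.append((s, i))
    return states

# The candidate polynomial invariant 2*s == i*(i + 1), checked on a trace.
invariant_holds = all(2 * s == i * (i + 1) for s, i in trace(20))
```

Symbol elimination aims to derive such identities symbolically, for all runs, rather than merely testing them on finitely many traces as this sketch does.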
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31