Project acronym ALPHA
Project Alpha Shape Theory Extended
Researcher (PI) Herbert Edelsbrunner
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Alpha shapes were invented in the early 1980s, and their implementation in three dimensions in the early 1990s was at the forefront of the exact arithmetic paradigm that enabled fast and correct geometric software. In the late 1990s, alpha shapes motivated the development of the wrap algorithm for surface reconstruction and of persistent homology, which became the starting point of rapidly expanding interest in topological algorithms aimed at data analysis questions.
We now see alpha shapes, wrap complexes, and persistent homology as three aspects of a larger theory, which we propose to fully develop. This viewpoint was a long time coming and finds its clear expression within a generalized version of discrete Morse theory. This unified framework offers new opportunities, including
(I) the adaptive reconstruction of shapes driven by the cavity structure;
(II) the stochastic analysis of all aspects of the theory;
(III) the computation of persistence of dense data, both in scale and in depth;
(IV) the study of long-range order in periodic and near-periodic point configurations.
These capabilities will significantly deepen as well as widen the theory and enable new applications in the sciences. To gain focus, we concentrate on low-dimensional applications in structural molecular biology and particle systems.
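For readers less familiar with the construction, the short sketch below (a generic, simplified textbook variant in Python using scipy, not code from the project) illustrates the idea behind an alpha shape in two dimensions: compute the Delaunay triangulation and keep only the triangles whose circumradius is at most a chosen alpha; the kept triangles and their boundary approximate the shape of the point set.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_complex_2d(points, alpha):
    """Simplified 2D alpha complex: Delaunay triangles whose circumradius
    is at most alpha (a common textbook variant of the construction)."""
    tri = Delaunay(points)
    kept = []
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        # side lengths and (unsigned) triangle area
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0:
            continue  # skip degenerate triangles
        circumradius = (la * lb * lc) / (4.0 * area)
        if circumradius <= alpha:
            kept.append((ia, ib, ic))
    return kept

# Example: a noisy ring of points; a small alpha recovers the annular shape.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[np.cos(angles), np.sin(angles)] + rng.normal(0, 0.05, (200, 2))
print(len(alpha_complex_2d(pts, alpha=0.25)), "triangles kept")
```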
Max ERC Funding
1 678 432 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym Big Splash
Project Big Splash: Efficient Simulation of Natural Phenomena at Extremely Large Scales
Researcher (PI) Christopher John Wojtan
Host Institution (HI) Institute of Science and Technology Austria
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Computational simulations of natural phenomena are essential in science, engineering, product design, architecture, and computer graphics applications. However, despite progress in numerical algorithms and computational power, it is still unfeasible to compute detailed simulations at large scales. To make matters worse, important phenomena like turbulent splashing liquids and fracturing solids rely on delicate coupling between small-scale details and large-scale behavior. Brute-force computation of such phenomena is intractable, and current adaptive techniques are too fragile, too costly, or too crude to capture subtle instabilities at small scales. Increases in computational power and parallel algorithms will improve the situation, but progress will only be incremental until we address the problem at its source.
I propose two main approaches to this problem of efficiently simulating large-scale liquid and solid dynamics. My first avenue of research combines numerics and shape: I will investigate a careful de-coupling of dynamics from geometry, allowing essential shape details to be preserved and retrieved without wasting computation. I will also develop methods for merging small-scale analytical solutions with large-scale numerical algorithms. (These ideas show particular promise for phenomena like splashing liquids and fracturing solids, whose small-scale behaviors are poorly captured by standard finite element methods.) My second main research direction is the manipulation of large-scale simulation data: Given the redundant and parallel nature of physics computation, we will drastically speed up computation with novel dimension reduction and data compression approaches. We can also minimize unnecessary computation by re-using existing simulation data. The novel approaches resulting from this work will undoubtedly synergize to enable the simulation and understanding of complicated natural and biological processes that are presently unfeasible to compute.
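As a generic illustration of the kind of dimension reduction and data re-use mentioned above (not the project's method), the sketch below compresses a synthetic matrix of simulation snapshots with a truncated SVD, a standard proper orthogonal decomposition, and reports the reconstruction error; all names and data are illustrative.

```python
import numpy as np

def truncated_pod(snapshots, k):
    """Compress a (dof x time) snapshot matrix to k modes via truncated SVD."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :k]                      # reduced spatial basis
    coeffs = np.diag(s[:k]) @ Vt[:k, :]   # time coefficients in that basis
    return basis, coeffs

# Synthetic "simulation" data: a few smooth modes plus a little noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 2000)[:, None]
t = np.linspace(0, 1, 100)[None, :]
snapshots = (np.sin(4 * np.pi * x) * np.cos(2 * np.pi * t)
             + 0.3 * np.sin(9 * np.pi * x) * np.sin(6 * np.pi * t)
             + 0.01 * rng.normal(size=(2000, 100)))

basis, coeffs = truncated_pod(snapshots, k=5)
reconstruction = basis @ coeffs
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with 5 modes: {rel_err:.3e}")
```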
Max ERC Funding
1 500 000 €
Duration
Start date: 2015-03-01, End date: 2020-02-29
Project acronym Browsec
Project Foundations and Tools for Client-Side Web Security
Researcher (PI) Matteo MAFFEI
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary The constantly increasing number of attacks on web applications shows how their rapid development has not been accompanied by adequate security foundations and demonstrates the lack of solid security enforcement tools. Indeed, web applications expose a gigantic attack surface, which hinders a rigorous understanding and enforcement of security properties. Hence, despite worthwhile efforts to design secure web applications, users will for some time to come be confronted with vulnerable, or maliciously crafted, code. Unfortunately, end users currently have no reliable way to protect themselves from malicious applications.
BROWSEC will develop a holistic approach to client-side web security, laying its theoretical foundations and developing innovative security enforcement technologies. In particular, BROWSEC will deliver the first client-side tool for securing web applications that is both practical, in that it is implemented as a browser extension and can thus be easily deployed at large, and provably sound, i.e., backed by machine-checked proofs that the tool provides end users with the required security guarantees. At the core of the proposal lies a novel monitoring technique, which treats the browser as a black box and intercepts its inputs and outputs in order to prevent dangerous information flows. With this lightweight monitoring approach, we aim to enforce strong security properties without requiring any expensive and, given the dynamic nature of web applications, statically infeasible program analysis.
BROWSEC is thus a multidisciplinary research effort, promising practical impact and delivering breakthrough advancements in various disciplines, such as web security, JavaScript semantics, software engineering, and program verification.
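Purely as a conceptual toy (in Python, unrelated to the actual browser extension or its formal model), the sketch below conveys the black-box monitoring idea: the monitor sees only labelled data crossing the boundary and blocks outputs that would send secret-labelled data to an untrusted destination; the class and label names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Labeled:
    """A value tagged with a confidentiality label ('public' or 'secret')."""
    value: str
    label: str

class FlowMonitor:
    """Toy black-box monitor: it never inspects the application's internals,
    only the labelled messages that cross the boundary."""

    def __init__(self, trusted_hosts):
        self.trusted_hosts = set(trusted_hosts)

    def allow_output(self, data: Labeled, destination_host: str) -> bool:
        # Secret data may only flow to explicitly trusted destinations.
        if data.label == "secret" and destination_host not in self.trusted_hosts:
            return False
        return True

monitor = FlowMonitor(trusted_hosts={"bank.example"})
token = Labeled("session-token-123", "secret")
print(monitor.allow_output(token, "bank.example"))     # True: trusted sink
print(monitor.allow_output(token, "tracker.example"))  # False: blocked flow
```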
Max ERC Funding
1 990 000 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym COMPLEX REASON
Project The Parameterized Complexity of Reasoning Problems
Researcher (PI) Stefan Szeider
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Reasoning, the task of deriving conclusions from facts, is fundamental in Artificial Intelligence, arising in a wide range of applications from Robotics to Expert Systems. The aim of this project is to devise new efficient algorithms for real-world reasoning problems and to gain new insights into what makes a reasoning problem hard, and what makes it easy. As the key to novel and groundbreaking results, we propose to study reasoning problems within the framework of Parameterized Complexity, a new and rapidly emerging field of Algorithms and Complexity. Parameterized Complexity takes into account those structural aspects of problem instances that are most significant for empirically observed problem hardness. Most of the considered reasoning problems are intractable in general, but the real-world context of their origin provides structural information that can be made accessible to algorithms in the form of parameters. This makes Parameterized Complexity an ideal setting for the analysis and efficient solution of these problems. A systematic study of the Parameterized Complexity of reasoning problems that covers both theoretical and empirical aspects is still missing. This proposal sets out to do exactly this and therefore has great potential for groundbreaking new results. The proposed research aims at a significant impact on the research culture by laying the grounds for closer cooperation between theorists and practitioners.
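As a textbook illustration of the parameterized paradigm (Vertex Cover parameterized by solution size, not one of the reasoning problems targeted by this project), the sketch below runs in time roughly 2^k times the instance size: the exponential part of the running time depends only on the parameter k, not on the size of the instance.

```python
def vertex_cover_at_most_k(edges, k):
    """Classic bounded-search-tree FPT algorithm: is there a vertex cover
    of size <= k?  Branch on an uncovered edge, so the tree has depth k."""
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # edges remain but no budget left
    u, v = edges[0]
    # Any cover must contain u or v; try both and recurse with budget k - 1.
    return (vertex_cover_at_most_k([e for e in edges if u not in e], k - 1)
            or vertex_cover_at_most_k([e for e in edges if v not in e], k - 1))

# A 5-cycle needs 3 vertices to cover all of its edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover_at_most_k(cycle, 2))  # False
print(vertex_cover_at_most_k(cycle, 3))  # True
```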
Max ERC Funding
1 421 130 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym Con Espressione
Project Getting at the Heart of Things: Towards Expressivity-aware Computer Systems in Music
Researcher (PI) Gerhard Widmer
Host Institution (HI) UNIVERSITAT LINZ
Call Details Advanced Grant (AdG), PE6, ERC-2014-ADG
Summary What makes music so important, what can make a performance so special and stirring? It is the things the music expresses, the emotions it induces, the associations it evokes, the drama and characters it portrays. The sources of this expressivity are manifold: the music itself, its structure, orchestration, personal associations, social settings, but also – and very importantly – the act of performance, the interpretation and expressive intentions made explicit by the musicians through nuances in timing, dynamics etc.
Thanks to research in fields like Music Information Research (MIR), computers can do many useful things with music, from beat and rhythm detection to song identification and tracking. However, they are still far from grasping the essence of music: they cannot tell whether a performance expresses playfulness or ennui, solemnity or gaiety, determination or uncertainty; they cannot produce music with a desired expressive quality; they cannot interact with human musicians in a truly musical way, recognising and responding to the expressive intentions implied in their playing.
The project is about developing machines that are aware of certain dimensions of expressivity, specifically in the domain of (classical) music, where expressivity is both essential and – at least as far as it relates to the act of performance – can be traced back to well-defined and measurable parametric dimensions (such as timing, dynamics, articulation). We will develop systems that can recognise, characterise, and search music by expressive aspects, and that can generate, modify, and react to expressive qualities in music. To do so, we will (1) bring together the fields of AI, Machine Learning, MIR and Music Performance Research; (2) integrate theories from Musicology to build more well-founded models of music understanding; (3) support model learning and validation with massive musical corpora of a size and quality unprecedented in computational music research.
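To make the "measurable parametric dimensions" mentioned above concrete, the toy sketch below derives one such dimension, a local tempo curve, from hypothetical performed note onsets and nominal score durations; the data and function names are illustrative, and real analyses in this area work with far richer representations.

```python
import numpy as np

def local_tempo_curve(onset_times_s, score_durations_beats):
    """Estimate local tempo (beats per minute) between successive notes by
    comparing performed inter-onset intervals with nominal score durations."""
    ioi = np.diff(onset_times_s)                    # performed inter-onset intervals (s)
    beats = np.asarray(score_durations_beats[:-1])  # nominal length of each note (beats)
    return 60.0 * beats / ioi                       # BPM between consecutive onsets

# Hypothetical performance of five quarter notes with an expressive ritardando.
onsets = np.array([0.00, 0.48, 0.98, 1.55, 2.25])   # seconds
durations = np.array([1, 1, 1, 1, 1])               # quarter notes = 1 beat each
print(np.round(local_tempo_curve(onsets, durations), 1))  # tempo slows toward the end
```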
Max ERC Funding
2 318 750 €
Duration
Start date: 2016-01-01, End date: 2021-12-31
Project acronym DOiCV
Project Discrete Optimization in Computer Vision: Theory and Practice
Researcher (PI) Vladimir Kolmogorov
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary This proposal aims at developing new inference algorithms for graphical models with discrete variables, with a focus on the MAP estimation task. MAP estimation algorithms such as graph cuts have transformed computer vision in the last decade; they are now routinely used and are also utilized in commercial systems.
Topics of this project fall into 3 categories.
Theoretically-oriented: Graph cut techniques come from combinatorial optimization. They can minimize a certain class of functions, namely submodular functions with unary and pairwise terms. Larger classes of functions can also be minimized in polynomial time, and a complete characterization of such classes has been established; they include k-submodular functions for an integer k ≥ 1.
I investigate whether such tools from discrete optimization can lead to more efficient inference algorithms for practical problems. I have already found an important application of k-submodular functions to minimizing Potts energy functions, which are frequently used in computer vision. The concept of submodularity also recently appeared in the context of computing marginals in graphical models; here, too, discrete optimization tools could be used.
Practically-oriented: Modern techniques such as graph cuts and tree-reweighted message passing give excellent results for some graphical models, such as those with Potts energies. However, they fail for more complicated models. I aim to develop new tools for tackling such hard energies. This will include exploring tighter convex relaxations of the problem.
Applications, sequence tagging problems: Recently, we developed new algorithms for inference in pattern-based Conditional Random Fields (CRFs) on a chain. This model can naturally be applied to sequence tagging problems; it generalizes the popular CRF model by giving it more flexibility. I will investigate (i) applications to specific tasks, such as protein secondary structure prediction, and (ii) ways to extend the model.
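For readers unfamiliar with graph-cut inference, the sketch below shows the standard construction for the binary case (using networkx rather than a specialised max-flow implementation, and with illustrative data): unary costs become source/sink edges, Potts pairwise terms become edges between neighbouring variables, and a minimum s-t cut yields a MAP labelling.

```python
import networkx as nx

def binary_potts_map(unary, pairs, weight):
    """MAP labelling of a binary Potts energy via a minimum s-t cut.

    unary[i] = (cost of label 0, cost of label 1), all costs non-negative;
    pairs    = list of neighbouring index pairs (i, j);
    weight   = non-negative Potts penalty paid when neighbours disagree.
    """
    G = nx.DiGraph()
    for i, (cost0, cost1) in enumerate(unary):
        G.add_edge("s", i, capacity=cost1)  # cut iff variable i takes label 1
        G.add_edge(i, "t", capacity=cost0)  # cut iff variable i takes label 0
    for i, j in pairs:
        G.add_edge(i, j, capacity=weight)   # one of these is cut iff labels differ
        G.add_edge(j, i, capacity=weight)
    cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
    labels = [0 if i in source_side else 1 for i in range(len(unary))]
    return labels, cut_value

# A tiny 4-variable chain: strong evidence pulls the two ends apart,
# and the Potts term decides where the single label change happens.
unary = [(5, 0), (2, 1), (1, 2), (0, 5)]   # (cost of label 0, cost of label 1)
pairs = [(0, 1), (1, 2), (2, 3)]
print(binary_potts_map(unary, pairs, weight=1.5))
```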
Max ERC Funding
1 641 585 €
Duration
Start date: 2014-06-01, End date: 2019-05-31
Project acronym EVOLOR
Project Cognitive Ageing in Dogs
Researcher (PI) Eniko Kubinyi
Host Institution (HI) EOTVOS LORAND TUDOMANYEGYETEM
Call Details Starting Grant (StG), LS9, ERC-2015-STG
Summary The aim of this project is to understand the causal factors contributing to the cognitive decline during senescence and to develop sensitive and standardized behaviour tests for early detection in order to increase the welfare of affected species. With the rapidly ageing population of Europe, related research is a priority in the European Union.
We will focus on characterising both the ageing phenotype and the underlying biological processes in dogs as a well-established natural animal model. We will develop a reliable and valid test battery applying innovative multidisciplinary methods (e.g. eye-tracking, motion path analysis, identification of behaviour using inertial sensors, EEG, fMRI, candidate gene analysis, and epigenetics) in both longitudinal and cross-sectional studies. We expect to reveal specific environmental risk factors which hasten ageing and also protective factors which may postpone it. We aim to provide objective criteria (behavioural, physiological and genetic biomarkers) to assess and predict the ageing trajectory for specific individual dogs. This would help veterinarians to recognise the symptoms early and initiate the necessary countermeasures.
This approach establishes the framework for answering the broad question of how we can extend the healthy life of ageing dogs, which indirectly also contributes to the welfare of the owner and decreases veterinary expenses. The detailed description of the ageing phenotype may also facilitate the use of dogs as a natural model for human senescence, including the development and application of pharmaceutical interventions.
We expect that our approach offers the scientific foundation to delay the onset of cognitive ageing in dog populations by 1-2 years, and also increase the proportion of dogs that enjoy healthy ageing.
Max ERC Funding
1 202 500 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym FEAR-SAP
Project Function and Evolution of Attack and Response Strategies during Allelopathy in Plants
Researcher (PI) Claude BECKER
Host Institution (HI) GREGOR MENDEL INSTITUT FUR MOLEKULARE PFLANZENBIOLOGIE GMBH
Call Details Starting Grant (StG), LS9, ERC-2016-STG
Summary In natural and agricultural habitats, plants grow in organismal communities and therefore have to compete for limited resources. Competition between different crop plants and between crops and weeds leads to losses of potential agricultural product and requires heavy use of fertilizer and herbicides, with negative effects for the environment and human health. Plants have evolved various strategies to outcompete their neighbours and to secure their access to resources; one of them is the release of toxic chemical compounds into the soil that interfere with the growth of neighbouring plants. Many of today’s major crops, such as wheat, rye and maize, produce phytotoxins. Conversely, crop species also suffer from chemical attack by other plants growing in their vicinity. Although many of the chemical compounds applied in this biochemical warfare have been identified, we know little about how they act in the target plant; neither do we understand how some plant species are able to tolerate this chemical attack.
FEAR-SAP studies the genetic architecture that underlies biochemical plant-plant interference and the evolution of weed resistance to crop-released phytotoxins. To this end it employs a comprehensive array of molecular genetics, genomics and metagenomics analyses, unprecedented in the research on plant-plant competition. The aims of FEAR-SAP are to uncover the molecular targets of plant-derived phytotoxins and to identify the genetic components that are essential for tolerance to these substances. Moreover, FEAR-SAP investigates how the microbial community that is associated with the plant might enhance efficiency of the donor and/or mediate tolerance of the target plant. Ultimately, we will use this information to explore intelligent engineering of more refined and competitive crops, which will be at the foundation of efficient and ecologically responsible weed control and improved crop rotation strategies.
Max ERC Funding
1 500 000 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym GRAPH GAMES
Project Quantitative Graph Games: Theory and Applications
Researcher (PI) Krishnendu Chatterjee
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary The theory of games played on graphs provides the mathematical foundations to study numerous important problems in branches of mathematics, economics, computer science, biology, and other fields. One key application area in computer science is the formal verification of reactive systems. The system is modeled as a graph, in which vertices represent states of the system, edges represent transitions, and paths represent behaviors of the system. The verification of the system in an arbitrary environment is then studied as a game played on the graph, where the players represent the different interacting agents. Traditionally, these games have been studied either with Boolean objectives or with single quantitative objectives. However, for the verification of systems that must behave correctly in resource-constrained environments (such as embedded systems), both Boolean and quantitative objectives are necessary: the Boolean objective for the correctness specification and the quantitative objective for the resource constraints. Thus we need to generalize the theory of graph games so that the objectives can express combinations of quantitative and Boolean objectives. In this project, we will focus on the following research objectives for the study of graph games with quantitative objectives:
(1) develop the mathematical theory and algorithms for the new class of games on graphs obtained by combining quantitative and Boolean objectives;
(2) develop practical techniques (such as compositional and abstraction techniques) that allow our algorithmic solutions be implemented efficiently to handle large game graphs;
(3) explore new application areas to demonstrate the application of quantitative graph games in diverse disciplines; and
(4) develop the theory of games on graphs with infinite state space and with quantitative objectives.
Since the theory of graph games is foundational in several disciplines, new algorithmic solutions are expected to have impact across these disciplines.
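As a minimal illustration of a game played on a graph with a Boolean objective (a textbook reachability game, not the combined Boolean/quantitative objectives this project targets), the sketch below computes by backward induction the set of vertices from which player 0 can force the play into a target set; the example graph is hypothetical.

```python
def reachability_winning_region(vertices, edges, owner, targets):
    """Vertices from which player 0 can force the play to reach `targets`.
    owner[v] is 0 or 1; player 0 chooses the successor at her vertices,
    player 1 at his.  Classic attractor (backward-induction) computation."""
    succ = {v: [] for v in vertices}
    for u, v in edges:
        succ[u].append(v)
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            if owner[v] == 0 and any(w in win for w in succ[v]):
                win.add(v)          # player 0 can move into the winning set
                changed = True
            elif owner[v] == 1 and succ[v] and all(w in win for w in succ[v]):
                win.add(v)          # player 1 cannot avoid the winning set
                changed = True
    return win

# Small example: player 1 owns vertex "b" and can escape to the sink "d",
# so only "c" (and the target itself) is winning for player 0.
vertices = ["a", "b", "c", "d", "t"]
edges = [("a", "b"), ("b", "t"), ("b", "d"), ("c", "t"), ("d", "d")]
owner = {"a": 0, "b": 1, "c": 0, "d": 1, "t": 0}
print(sorted(reachability_winning_region(vertices, edges, owner, {"t"})))
```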
Max ERC Funding
1 163 111 €
Duration
Start date: 2011-12-01, End date: 2016-11-30
Project acronym GRAPHALGAPP
Project Challenges in Graph Algorithms with Applications
Researcher (PI) Monika Hildegard Henzinger
Host Institution (HI) UNIVERSITAT WIEN
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary This project has two thrusts of equal importance. Firstly, it aims to develop new graph algorithmic techniques, specifically in the areas of dynamic graph algorithms, online algorithms and approximation algorithms for graph-based optimization problems. Thus, it proposes to solve long-standing, fundamental problems that are central to the field of algorithms. Secondly, it plans to apply these techniques to graph algorithmic problems in different fields of application, specifically in computer-aided verification, computational biology, and web-based advertisement with the goal of significantly advancing the state-of-the-art in these fields. This includes theoretical work as well as experimental evaluation on real-life data sets.
The goal of this project is thus a comprehensive approach to algorithms research that combines excellent fundamental algorithms research with the solution of concrete application problems.
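As a small, generic example of the dynamic-graph setting mentioned above (restricted to edge insertions; fully dynamic algorithms that also support deletions are far harder and are among the problems the project studies), the sketch below answers connectivity queries under edge insertions with a union-find structure in near-constant amortised time per operation.

```python
class IncrementalConnectivity:
    """Connectivity under edge insertions via union-find with path halving
    and union by size (near-constant amortised time per operation)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_edge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru          # attach the smaller tree under the larger
        self.size[ru] += self.size[rv]

    def connected(self, u, v):
        return self.find(u) == self.find(v)

dc = IncrementalConnectivity(5)
dc.insert_edge(0, 1)
dc.insert_edge(3, 4)
print(dc.connected(0, 4))  # False
dc.insert_edge(1, 3)
print(dc.connected(0, 4))  # True
```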
Max ERC Funding
2 428 258 €
Duration
Start date: 2014-03-01, End date: 2019-08-31