Project acronym ALPHA
Project Alpha Shape Theory Extended
Researcher (PI) Herbert Edelsbrunner
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Alpha shapes were invented in the early 1980s, and their implementation in three dimensions in the early 1990s was at the forefront of the exact-arithmetic paradigm that enabled fast and correct geometric software. In the late 1990s, alpha shapes motivated the development of the wrap algorithm for surface reconstruction and of persistent homology, which became the starting point of rapidly expanding interest in topological algorithms for data analysis.
We now see alpha shapes, wrap complexes, and persistent homology as three aspects of a larger theory, which we propose to fully develop. This viewpoint was a long time coming and finds its clear expression within a generalized
version of discrete Morse theory. This unified framework offers new opportunities, including
(I) the adaptive reconstruction of shapes driven by the cavity structure;
(II) the stochastic analysis of all aspects of the theory;
(III) the computation of persistence of dense data, both in scale and in depth;
(IV) the study of long-range order in periodic and near-periodic point configurations.
These capabilities will significantly deepen as well as widen the theory and enable new applications in the sciences. To gain focus, we concentrate on low-dimensional applications in structural molecular biology and particle systems.
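To give a concrete flavour of persistence, the following is a minimal, self-contained Python sketch of the 0-dimensional case only (an illustration, not the project's software): in a distance filtration of a point cloud, every component is born at scale zero and dies at the length of the edge that merges it into another component.

    # Minimal sketch of 0-dimensional persistent homology on a point cloud:
    # components are born at scale 0 and die when an edge of the distance
    # filtration merges them. Illustrative only, not the project's software.
    from itertools import combinations
    from math import dist

    def zero_dim_persistence(points):
        parent = list(range(len(points)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        # Filtration: all pairwise edges, ordered by increasing length.
        edges = sorted(
            (dist(points[i], points[j]), i, j)
            for i, j in combinations(range(len(points)), 2)
        )
        pairs = []
        for length, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:                # the edge merges two components:
                parent[rj] = ri         # one of them dies at this scale
                pairs.append((0.0, length))
        pairs.append((0.0, float("inf")))  # one component never dies
        return pairs

    print(zero_dim_persistence([(0, 0), (1, 0), (0, 1), (10, 10)]))

Alpha complexes, wrap complexes, and persistence in higher dimensions require substantially more machinery, such as boundary-matrix reduction.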
Max ERC Funding
1 678 432 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym AYURYOG
Project Medicine, Immortality, Moksha: Entangled Histories of Yoga, Ayurveda and Alchemy in South Asia
Researcher (PI) Dagmar Wujastyk
Host Institution (HI) UNIVERSITAT WIEN
Call Details Starting Grant (StG), SH6, ERC-2014-STG
Summary The project will examine the histories of yoga, ayurveda and rasashastra (Indian alchemy and iatrochemistry) from the tenth century to the present, focussing on the disciplines' health, rejuvenation and longevity practices. The goals of the project are to reveal the entanglements of these historical traditions, and to trace the trajectories of their evolution as components of today's global healthcare and personal development industries.
Our hypothesis is that practices aimed at achieving health, rejuvenation and longevity constitute a key area of exchange between the three disciplines, preparing the ground for a series of important pharmaceutical and technological innovations and also profoundly influencing the discourses of today's medicalized forms of globalized yoga as well as of contemporary institutionalized forms of ayurveda and rasashastra.
Drawing upon the primary historical sources of each respective tradition as well as on fieldwork data, the research team will explore the shared terminology, praxis and theory of these three disciplines. We will examine why, when and how health, rejuvenation and longevity practices were employed; how each discipline’s discourse and practical applications relate to those of the others; and how past encounters and cross-fertilizations impact on contemporary health-related practices in yogic, ayurvedic and alchemists’ milieus.
The five-year project will be based at the Department of South Asian, Tibetan and Buddhist Studies at Vienna University and carried out by an international team of 3 post-doctoral researchers. The research will be grounded in the fields of South Asian studies and social history. An international workshop and an international conference will be organized to present and discuss the research results, which will also be published in peer-reviewed journals, an edited volume, and in individual monographs. A project website will provide open access to all research results.
Max ERC Funding
1 416 146 €
Duration
Start date: 2015-06-01, End date: 2020-05-31
Project acronym Big Splash
Project Big Splash: Efficient Simulation of Natural Phenomena at Extremely Large Scales
Researcher (PI) Christopher John Wojtan
Host Institution (HI) Institute of Science and Technology Austria
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Computational simulations of natural phenomena are essential in science, engineering, product design, architecture, and computer graphics applications. However, despite progress in numerical algorithms and computational power, it is still infeasible to compute detailed simulations at large scales. To make matters worse, important phenomena like turbulent splashing liquids and fracturing solids rely on delicate coupling between small-scale details and large-scale behavior. Brute-force computation of such phenomena is intractable, and current adaptive techniques are too fragile, too costly, or too crude to capture subtle instabilities at small scales. Increases in computational power and parallel algorithms will improve the situation, but progress will only be incremental until we address the problem at its source.
I propose two main approaches to this problem of efficiently simulating large-scale liquid and solid dynamics. My first avenue of research combines numerics and shape: I will investigate a careful decoupling of dynamics from geometry, allowing essential shape details to be preserved and retrieved without wasting computation. I will also develop methods for merging small-scale analytical solutions with large-scale numerical algorithms. (These ideas show particular promise for phenomena like splashing liquids and fracturing solids, whose small-scale behaviors are poorly captured by standard finite element methods.) My second main research direction is the manipulation of large-scale simulation data: Given the redundant and parallel nature of physics computation, we will drastically speed up computation with novel dimension reduction and data compression approaches. We can also minimize unnecessary computation by re-using existing simulation data. The novel approaches resulting from this work will undoubtedly synergize to enable the simulation and understanding of complicated natural and biological processes that are presently infeasible to compute.
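As one hedged illustration of the dimension-reduction direction (not the project's method; the snapshot data below is synthetic), a matrix of simulation snapshots can be compressed with a truncated SVD, the core of proper orthogonal decomposition:

    # Sketch: compress simulation snapshots with a truncated SVD. The
    # illustrative assumption is that each column of S is one flattened
    # simulation state; the data here is random, so it compresses poorly,
    # whereas real snapshot matrices tend to have fast-decaying spectra.
    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.standard_normal((10_000, 200))        # 200 snapshots, 10k DOFs

    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

    k = 20                                        # keep 20 dominant modes
    S_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k]  # rank-k reconstruction

    rel_err = np.linalg.norm(S - S_k) / np.linalg.norm(S)
    ratio = S.size / (U[:, :k].size + k + Vt[:k].size)
    print(f"relative error {rel_err:.3f}, compression ratio {ratio:.1f}x")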
Max ERC Funding
1 500 000 €
Duration
Start date: 2015-03-01, End date: 2020-02-29
Project acronym Browsec
Project Foundations and Tools for Client-Side Web Security
Researcher (PI) Matteo MAFFEI
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary The constantly increasing number of attacks on web applications shows that their rapid development has not been accompanied by adequate security foundations, and demonstrates the lack of solid security enforcement tools. Indeed, web applications expose a gigantic attack surface, which hinders a rigorous understanding and enforcement of security properties. Hence, despite worthwhile efforts to design secure web applications, users will for the foreseeable future be confronted with vulnerable, or maliciously crafted, code. Unfortunately, end users at present have no way to reliably protect themselves from malicious applications.
BROWSEC will develop a holistic approach to client-side web security, laying its theoretical foundations and developing innovative security enforcement technologies. In particular, BROWSEC will deliver the first client-side tool to secure web applications that is practical, in that it is implemented as an extension and can thus be easily deployed at large, and also provably sound, i.e., backed up by machine-checked proofs that the tool provides end users with the required security guarantees. At the core of the proposal lies a novel monitoring technique, which treats the browser as a blackbox and intercepts its inputs and outputs in order to prevent dangerous information flows. With this lightweight monitoring approach, we aim at enforcing strong security properties without requiring any expensive and, given the dynamic nature of web applications, statically infeasible program analysis.
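To make the black-box monitoring idea concrete, here is a deliberately naive Python sketch (all names, the taint mechanism, and the policy format are illustrative assumptions, not BROWSEC's design): inputs entering the browser are tagged with confidentiality labels, and intercepted outputs are blocked when a labelled value would flow to a destination the policy does not allow.

    # Toy sketch of black-box information-flow monitoring: inputs get
    # confidentiality labels, and outputs are blocked when a labelled value
    # would flow to a destination not cleared for it. Names are illustrative.

    POLICY = {"secret": {"https://bank.example"},   # where each label may flow
              "public": {"*"}}

    tainted = {}                                     # value -> label

    def intercept_input(value, label):
        tainted[value] = label
        return value

    def intercept_output(destination, payload):
        label = tainted.get(payload, "public")
        allowed = POLICY[label]
        verdict = "ALLOW" if "*" in allowed or destination in allowed else "BLOCK"
        print(f"{verdict} {label!r} -> {destination}")

    pin = intercept_input("1234", "secret")
    intercept_output("https://bank.example", pin)    # ALLOW
    intercept_output("https://evil.example", pin)    # BLOCK: dangerous flow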
BROWSEC is thus a multidisciplinary research effort, promising practical impact and delivering breakthrough advancements in various disciplines, such as web security, JavaScript semantics, software engineering, and program verification.
Max ERC Funding
1 990 000 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym COMPLEX REASON
Project The Parameterized Complexity of Reasoning Problems
Researcher (PI) Stefan Szeider
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Reasoning, the task of deriving conclusions from facts, is fundamental in Artificial Intelligence, arising in a wide range of applications from Robotics to Expert Systems. The aim of this project is to devise new efficient algorithms for real-world reasoning problems and to gain new insights into what makes a reasoning problem hard, and what makes it easy. As the key to novel and groundbreaking results we propose to study reasoning problems within the framework of Parameterized Complexity, a new and rapidly emerging field of Algorithms and Complexity. Parameterized Complexity takes into account those structural aspects of problem instances that are most significant for empirically observed problem hardness. Most of the considered reasoning problems are intractable in general, but the real-world context of their origin provides structural information that can be made accessible to algorithms in the form of parameters. This makes Parameterized Complexity an ideal setting for the analysis and efficient solution of these problems. A systematic study of the Parameterized Complexity of reasoning problems covering both theoretical and empirical aspects has so far been lacking. This proposal sets out to do exactly that and therefore has great potential for groundbreaking new results. The proposed research aims at a significant impact on the research culture by laying the grounds for closer cooperation between theorists and practitioners.
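To illustrate the flavour of parameterized algorithms with a textbook case of my own choosing (the project itself targets reasoning problems such as satisfiability, not this example), the following Python sketch decides whether a graph has a vertex cover of size at most k by a bounded search tree, in roughly O(2^k · m) time, exponential only in the parameter and not in the input size:

    # Classic fixed-parameter algorithm: does the graph have a vertex cover
    # of size <= k? Branch on an uncovered edge (u, v): either u or v must
    # be in the cover, giving a search tree of depth at most k.

    def vertex_cover(edges, k):
        if not edges:
            return True                 # nothing left to cover
        if k == 0:
            return False                # edges remain but budget exhausted
        u, v = edges[0]
        for picked in (u, v):           # branch: picked joins the cover
            rest = [e for e in edges if picked not in e]
            if vertex_cover(rest, k - 1):
                return True
        return False

    edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
    print(vertex_cover(edges, 2))   # True: e.g. {1, 3} covers all edges
    print(vertex_cover(edges, 1))   # False: no single vertex suffices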
Max ERC Funding
1 421 130 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym Con Espressione
Project Getting at the Heart of Things: Towards Expressivity-aware Computer Systems in Music
Researcher (PI) Gerhard Widmer
Host Institution (HI) UNIVERSITAT LINZ
Call Details Advanced Grant (AdG), PE6, ERC-2014-ADG
Summary What makes music so important, what can make a performance so special and stirring? It is the things the music expresses, the emotions it induces, the associations it evokes, the drama and characters it portrays. The sources of this expressivity are manifold: the music itself, its structure, orchestration, personal associations, social settings, but also – and very importantly – the act of performance, the interpretation and expressive intentions made explicit by the musicians through nuances in timing, dynamics etc.
Thanks to research in fields like Music Information Research (MIR), computers can do many useful things with music, from beat and rhythm detection to song identification and tracking. However, they are still far from grasping the essence of music: they cannot tell whether a performance expresses playfulness or ennui, solemnity or gaiety, determination or uncertainty; they cannot produce music with a desired expressive quality; they cannot interact with human musicians in a truly musical way, recognising and responding to the expressive intentions implied in their playing.
The project is about developing machines that are aware of certain dimensions of expressivity, specifically in the domain of (classical) music, where expressivity is both essential and – at least as far as it relates to the act of performance – can be traced back to well-defined and measurable parametric dimensions (such as timing, dynamics, articulation). We will develop systems that can recognise, characterise, search music by expressive aspects, generate, modify, and react to expressive qualities in music. To do so, we will (1) bring together the fields of AI, Machine Learning, MIR and Music Performance Research; (2) integrate theories from Musicology to build more well-founded models of music understanding; (3) support model learning and validation with massive musical corpora of a size and quality unprecedented in computational music research.
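As a small, hypothetical illustration of one such measurable parametric dimension (the onset data below is invented, and this is not the project's model), a local tempo curve can be read off by comparing notated inter-onset intervals with performed ones:

    # Sketch: estimate a local tempo curve by comparing notated beats with
    # performed onset times (in seconds). The data is made up for illustration.
    score_beats = [0, 1, 2, 3, 4, 5]                   # notated onsets, in beats
    perf_onsets = [0.0, 0.52, 1.01, 1.62, 2.30, 2.85]  # measured onsets, in s

    tempo_curve = []
    for i in range(len(score_beats) - 1):
        beats = score_beats[i + 1] - score_beats[i]
        seconds = perf_onsets[i + 1] - perf_onsets[i]
        tempo_curve.append(60.0 * beats / seconds)     # local tempo in BPM

    print([round(t) for t in tempo_curve])
    # [115, 122, 98, 88, 109] -- the slowing around beat 3 is the kind of
    # expressive timing deviation the project wants machines to grasp.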
Max ERC Funding
2 318 750 €
Duration
Start date: 2016-01-01, End date: 2021-12-31
Project acronym CT
Project ‘Challenging Time(s)’ – A New Approach to Written Sources for Ancient Egyptian Chronology
Researcher (PI) Roman GUNDACKER
Host Institution (HI) OESTERREICHISCHE AKADEMIE DER WISSENSCHAFTEN
Call Details Starting Grant (StG), SH6, ERC-2017-STG
Summary The chronology of ancient Egypt is a golden thread for the memory of early civilisation. It is not only the scaffolding of four millennia of Egyptian history, but also one of the pillars of the chronology of the entire ancient Near East and eastern Mediterranean. The basic division of Egyptian history into 31 dynasties was introduced by Manetho, an Egyptian historian (c. 280 BC) writing in Greek for the Ptolemaic kings. Despite the fact that this scheme was adopted by Egyptologists 200 years ago and remains in use until today, there has never been an in-depth analysis of Manetho’s kinglist and of the names in it. Until now, identifying the Greek renderings of royal names with their hieroglyphic counterparts was more or less educated guesswork. It is thus essential to introduce the principles of textual criticism, to evaluate royal names on a firm linguistic basis and to provide for the first time ever an Egyptological commentary on Manetho’s kinglist. Just like Manetho did long ago, now it is necessary to gather all inscriptional evidence on Egyptian history: dated inscriptions, biographic and prosopographic data of royalty and commoners, genuine Egyptian kinglists and annals. These data must be critically evaluated in context, their assignment to specific reigns must be reconsidered, and genealogies and sequences of officials must be reviewed. The results are not only important for Egyptian historical chronology and for our understanding of the Egyptian perception of history, but also for the interpretation of chronological data gained from archaeological excavations (material culture) and sciences (14C dates, which are interpreted on the basis of historical chronology, e.g., via ‘Bayesian modelling’). The applicant has already shown the significance of this approach in pilot studies on the pyramid age. Further work in cooperation with international specialists will thus shed new light on ancient sources in order to determine the chronology of early civilisation.
Max ERC Funding
1 499 992 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym DOiCV
Project Discrete Optimization in Computer Vision: Theory and Practice
Researcher (PI) Vladimir Kolmogorov
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary This proposal aims at developing new inference algorithms for graphical models with discrete variables, with a focus on the MAP estimation task. MAP estimation algorithms such as graph cuts have transformed computer vision in the last decade; they are now routinely used and are also utilized in commercial systems.
Topics of this project fall into 3 categories.
Theoretically-oriented: Graph cut techniques come from combinatorial optimization. They can minimize a certain class of functions, namely submodular functions with unary and pairwise terms. Larger classes of functions can be minimized in polynomial time. A complete characterization of such classes has been established. They include k-submodular functions for an integer k ≥ 1.
I investigate whether such tools from discrete optimization can lead to more efficient inference algorithms for practical problems. I have already found an important application of k-submodular functions for minimizing Potts energy functions that are frequently used in computer vision. The concept of submodularity has also recently appeared in the context of computing marginals in graphical models; here, too, discrete optimization tools could be used.
Practically-oriented: Modern techniques such as graph cuts and tree-reweighted message passing give excellent results for some graphical models such as with the Potts energies. However, they fail for more complicated models. I aim to develop new tools for tackling such hard energies. This will include exploring tighter convex relaxations of the problem.
Applications, sequence tagging problems: Recently, we developed new algorithms for inference in pattern-based Conditional Random Fields (CRFs) on a chain. This model can naturally be applied to sequence tagging problems; it generalizes the popular CRF model by giving it more flexibility. I will investigate (i) applications to specific tasks, such as protein secondary structure prediction, and (ii) ways to extend the model.
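For concreteness, the following Python sketch shows the classical graph-cut construction for the binary case (the toy energy and the networkx max-flow solver are my stand-ins for the specialised algorithms the project builds on): a submodular energy with unary and pairwise terms is minimized exactly by an s-t minimum cut.

    # Sketch: exact minimization of a binary submodular energy
    #   E(x) = sum_i u_i(x_i) + sum_(i,j) w_ij * [x_i != x_j],  w_ij >= 0,
    # by an s-t minimum cut, the construction behind "graph cuts".
    import networkx as nx

    unary = {                      # node -> (cost of label 0, cost of label 1)
        "a": (0.0, 5.0),
        "b": (1.0, 2.0),
        "c": (6.0, 1.0),
    }
    pairwise = {("a", "b"): 3.0, ("b", "c"): 3.0}   # w_ij, toy values

    G = nx.DiGraph()
    for i, (cost0, cost1) in unary.items():
        G.add_edge("s", i, capacity=cost1)  # cut iff i takes label 1
        G.add_edge(i, "t", capacity=cost0)  # cut iff i takes label 0
    for (i, j), w in pairwise.items():
        G.add_edge(i, j, capacity=w)        # cut iff the labels disagree
        G.add_edge(j, i, capacity=w)

    energy, (source_side, _) = nx.minimum_cut(G, "s", "t")
    labels = {i: 0 if i in source_side else 1 for i in unary}
    print(energy, labels)                   # 5.0 {'a': 0, 'b': 0, 'c': 1}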
Max ERC Funding
1 641 585 €
Duration
Start date: 2014-06-01, End date: 2019-05-31
Project acronym GRAPHALGAPP
Project Challenges in Graph Algorithms with Applications
Researcher (PI) Monika Hildegard Henzinger
Host Institution (HI) UNIVERSITAT WIEN
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary This project has two thrusts of equal importance. Firstly, it aims to develop new graph algorithmic techniques, specifically in the areas of dynamic graph algorithms, online algorithms and approximation algorithms for graph-based optimization problems. Thus, it proposes to solve long-standing, fundamental problems that are central to the field of algorithms. Secondly, it plans to apply these techniques to graph algorithmic problems in different fields of application, specifically in computer-aided verification, computational biology, and web-based advertisement with the goal of significantly advancing the state-of-the-art in these fields. This includes theoretical work as well as experimental evaluation on real-life data sets.
Thus, the goal of this project is a comprehensive approach to algorithms research that involves both excellent fundamental algorithms research and the solution of concrete applications.
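As a minimal taste of the dynamic-graph-algorithms theme (an illustrative sketch of my own; the project targets the much harder fully dynamic setting, where edges may also be deleted), connectivity under edge insertions alone can be maintained in near-constant amortized time per operation with a union-find structure:

    # Sketch: incremental dynamic connectivity. Edge insertions and
    # connectivity queries via union-find with path halving; supporting
    # deletions as well is the genuinely hard "fully dynamic" case.

    class IncrementalConnectivity:
        def __init__(self, n):
            self.parent = list(range(n))

        def _find(self, v):
            while self.parent[v] != v:
                self.parent[v] = self.parent[self.parent[v]]  # path halving
                v = self.parent[v]
            return v

        def insert_edge(self, u, v):
            self.parent[self._find(u)] = self._find(v)

        def connected(self, u, v):
            return self._find(u) == self._find(v)

    dc = IncrementalConnectivity(5)
    dc.insert_edge(0, 1)
    dc.insert_edge(3, 4)
    print(dc.connected(0, 1), dc.connected(1, 3))  # True False
    dc.insert_edge(1, 3)
    print(dc.connected(0, 4))                      # True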
Max ERC Funding
2 428 258 €
Duration
Start date: 2014-03-01, End date: 2019-08-31
Project acronym HOMOVIS
Project High-level Prior Models for Computer Vision
Researcher (PI) Thomas Pock
Host Institution (HI) TECHNISCHE UNIVERSITAET GRAZ
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary For more than 50 years, computer vision has been a very active research field, but it is still far from matching the abilities of the human visual system. This stunning performance of the human visual system can be mainly attributed to a highly efficient three-layer architecture: a low-level layer that sparsifies the visual information by detecting important image features such as image gradients, a mid-level layer that implements disocclusion and boundary-completion processes, and finally a high-level layer that is concerned with the recognition of objects.
Variational methods are certainly among the most successful methods for low-level vision. However, it is very unlikely that these methods can be further improved without the integration of high-level prior models. Therefore, we propose a unified mathematical framework that allows for a natural integration of high-level priors into low-level variational models. In particular, we propose to represent images in a higher-dimensional space which is inspired by the architecture of the visual cortex. This space decomposes the image gradients into magnitude and direction and hence lifts the 2D image to a 3D space. This has several advantages: Firstly, the higher-dimensional embedding allows mid-level tasks such as boundary completion and disocclusion to be implemented in a very natural way. Secondly, the lifted space gives explicit access to the orientation and the magnitude of image gradients. In turn, distributions of gradient orientations – known to be highly effective for object detection – can be utilized as high-level priors. This inverts the bottom-up nature of object detectors and hence adds an efficient top-down process to low-level variational models.
The developed mathematical approaches will go significantly beyond traditional variational models for computer vision and hence will define a new state-of-the-art in the field.
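To make the lifting construction tangible, here is a hedged numpy sketch (the toy image and the choice of eight orientation bins are my assumptions, not the project's formulation) that maps a 2D image to a 3D volume indexed by position and quantized gradient orientation, depositing the gradient magnitude:

    # Sketch: "lift" a 2D image into a 3D (x, y, orientation) volume by
    # decomposing its gradients into direction and magnitude, as described
    # above. Quantization into 8 orientation bins is an arbitrary choice.
    import numpy as np

    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                    # toy image: a bright square

    gy, gx = np.gradient(img)                  # per-pixel image gradients
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                 # direction in (-pi, pi]

    n_bins = 8
    bins = ((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    lifted = np.zeros(img.shape + (n_bins,))   # the 3D lifted space
    ii, jj = np.indices(img.shape)
    lifted[ii, jj, bins] = magnitude           # deposit magnitude at (x, y, theta)

    # Orientation histogram over the whole image: the kind of gradient-
    # orientation statistic usable as a high-level prior for detection.
    print(lifted.sum(axis=(0, 1)).round(2))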
Max ERC Funding
1 473 525 €
Duration
Start date: 2015-06-01, End date: 2020-05-31