Project acronym Big Splash
Project Big Splash: Efficient Simulation of Natural Phenomena at Extremely Large Scales
Researcher (PI) Christopher John Wojtan
Host Institution (HI) Institute of Science and Technology Austria
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Computational simulations of natural phenomena are essential in science, engineering, product design, architecture, and computer graphics applications. However, despite progress in numerical algorithms and computational power, it is still infeasible to compute detailed simulations at large scales. To make matters worse, important phenomena like turbulent splashing liquids and fracturing solids rely on delicate coupling between small-scale details and large-scale behavior. Brute-force computation of such phenomena is intractable, and current adaptive techniques are too fragile, too costly, or too crude to capture subtle instabilities at small scales. Increases in computational power and parallel algorithms will improve the situation, but progress will only be incremental until we address the problem at its source.
I propose two main approaches to this problem of efficiently simulating large-scale liquid and solid dynamics. My first avenue of research combines numerics and shape: I will investigate a careful decoupling of dynamics from geometry, allowing essential shape details to be preserved and retrieved without wasting computation. I will also develop methods for merging small-scale analytical solutions with large-scale numerical algorithms. (These ideas show particular promise for phenomena like splashing liquids and fracturing solids, whose small-scale behaviors are poorly captured by standard finite element methods.) My second main research direction is the manipulation of large-scale simulation data: Given the redundant and parallel nature of physics computation, we will drastically speed up these simulations with novel dimension reduction and data compression approaches. We can also minimize unnecessary computation by re-using existing simulation data. The novel approaches resulting from this work will undoubtedly synergize to enable the simulation and understanding of complicated natural and biological processes that are presently infeasible to compute.
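The summary does not pin down a specific algorithm, so the following is only a minimal sketch of the general idea behind dimension reduction and compression of simulation data: collect simulation snapshots as columns of a matrix, keep a few leading SVD modes, and reconstruct frames from the reduced representation. The array sizes, function names, and the use of a truncated SVD are assumptions made for illustration, not the project's method.

import numpy as np

def compress_snapshots(snapshots, num_modes):
    """Compress (num_cells, num_frames) snapshot data into a reduced basis."""
    # Truncated SVD: keep only the leading spatial modes.
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :num_modes]                              # reduced spatial basis
    coeffs = np.diag(s[:num_modes]) @ Vt[:num_modes, :]   # per-frame coefficients
    return basis, coeffs

def reconstruct(basis, coeffs):
    # Approximate the original snapshots from the compressed representation.
    return basis @ coeffs

# Toy usage: 10,000 cells, 200 frames of (nearly) low-rank data, keep 20 modes.
rng = np.random.default_rng(0)
data = rng.standard_normal((10_000, 20)) @ rng.standard_normal((20, 200))
data += 0.01 * rng.standard_normal(data.shape)
basis, coeffs = compress_snapshots(data, num_modes=20)
error = np.linalg.norm(data - reconstruct(basis, coeffs)) / np.linalg.norm(data)
print(f"relative reconstruction error: {error:.4f}")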
Max ERC Funding
1 500 000 €
Duration
Start date: 2015-03-01, End date: 2020-02-29
Project acronym HOMOVIS
Project High-level Prior Models for Computer Vision
Researcher (PI) Thomas Pock
Host Institution (HI) TECHNISCHE UNIVERSITAET GRAZ
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary For more than 50 years, computer vision has been a very active research field, but it still falls far short of the abilities of the human visual system. The stunning performance of the human visual system can be largely attributed to a highly efficient three-layer architecture: a low-level layer that sparsifies the visual information by detecting important image features such as image gradients, a mid-level layer that implements disocclusion and boundary-completion processes, and finally a high-level layer that is concerned with the recognition of objects.
Variational methods are certainly among the most successful methods for low-level vision. However, it is very unlikely that these methods can be further improved without the integration of high-level prior models. Therefore, we propose a unified mathematical framework that allows for a natural integration of high-level priors into low-level variational models. In particular, we propose to represent images in a higher-dimensional space inspired by the architecture of the visual cortex. This space decomposes image gradients into magnitude and direction and hence lifts the 2D image into a 3D space. This has several advantages: Firstly, the higher-dimensional embedding makes it possible to implement mid-level tasks such as boundary completion and disocclusion in a very natural way. Secondly, the lifted space allows explicit access to the orientation and the magnitude of image gradients. In turn, distributions of gradient orientations – known to be highly effective for object detection – can be utilized as high-level priors. This inverts the bottom-up nature of object detectors and hence adds an efficient top-down process to low-level variational models.
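As a purely illustrative reading of the lifting described above, the sketch below decomposes image gradients into magnitude and orientation and scatters them into a 3D volume indexed by pixel position and a discretized orientation. The bin count, discretization, and function names are assumptions made for the example, not the project's actual formulation.

import numpy as np

def lift_image(image, num_orientations=16):
    # Image gradients along rows (gy) and columns (gx).
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)          # gradient magnitude
    orientation = np.arctan2(gy, gx)      # gradient direction in [-pi, pi]
    # Quantize the orientation into discrete bins.
    bins = ((orientation + np.pi) / (2 * np.pi) * num_orientations).astype(int)
    bins = np.clip(bins, 0, num_orientations - 1)
    # Lift the 2D image into a 3D (row, column, orientation) volume.
    lifted = np.zeros(image.shape + (num_orientations,))
    rows, cols = np.indices(image.shape)
    lifted[rows, cols, bins] = magnitude
    return lifted

# Toy usage on a synthetic vertical-edge image.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
print(lift_image(img).shape)              # (64, 64, 16)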
The developed mathematical approaches will go significantly beyond traditional variational models for computer vision and hence will define a new state of the art in the field.
Max ERC Funding
1 473 525 €
Duration
Start date: 2015-06-01, End date: 2020-05-31
Project acronym MARIPOLDATA
Project The Politics of Marine Biodiversity Data: Global and National Policies and Practices of Monitoring the Oceans
Researcher (PI) Alice VADROT
Host Institution (HI) UNIVERSITAT WIEN
Call Details Starting Grant (StG), SH2, ERC-2018-STG
Summary In order to protect marine biodiversity and ensure that benefits are equally shared, the UN General Assembly has decided to develop a new legally binding treaty under the United Nations Convention on the Law of the Sea. Marine biodiversity data will play a central role: firstly, in supporting intergovernmental efforts to identify, protect and monitor marine biodiversity; secondly, in informing governments interested in particular aspects of marine biodiversity, including its economic use and its contribution to biosecurity. In examining how these data are represented and used, this project will create a novel understanding of the materiality of science-policy interrelations, identify new forms of power in global environmental politics, and develop the methodologies to do so. This is crucial because the capacities to develop and use data infrastructures are unequally distributed among countries, and global initiatives for data sharing are significantly challenged by conflicting perceptions of who benefits from marine biodiversity research. Despite broad recognition of these challenges within natural science communities, the political aspects of marine biodiversity data remain understudied. Academic debates tend to neglect the role of international politics in legitimising and authorising scientific concepts, data sources and criteria, and how this influences national monitoring priorities. The central objective of MARIPOLDATA is to overcome these shortcomings by developing and applying a new multiscale methodology for grounding the analysis of science-policy interrelations in empirical research. An interdisciplinary team, led by the PI, will collect and analyse data across different policy levels and spatial scales by combining 1) ethnographic studies at intergovernmental negotiation sites with 2) a comparative analysis of national biodiversity monitoring policies and practices and 3) bibliometric and social network analyses and oral history interviews for mapping marine biodiversity science.
Max ERC Funding
1 391 932 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym ScaleML
Project Elastic Coordination for Scalable Machine Learning
Researcher (PI) Dan ALISTARH
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Machine learning and data science have seen tremendous progress over the last decade, leading to exciting research developments and significant practical impact. Broadly, progress in this area has been enabled by the rapidly increasing availability of data, by better algorithms, and by large-scale platforms enabling efficient computation on immense datasets. While it is reasonable to expect that the first two trends will continue for the foreseeable future, the same cannot be said of the third: continually increasing computational performance. Increasing computational demands place immense pressure on algorithms and systems to scale, while the performance limits of traditional computing paradigms are becoming increasingly apparent. Thus, the question of how to build algorithms and systems for scalable machine learning is extremely pressing. The project will take a decisive step toward answering this challenge by developing new abstractions, algorithms and system support for scalable machine learning. In a nutshell, the approach is elastic coordination: allowing machine learning algorithms to approximate and/or randomize their synchronization and communication semantics, in a structured, controlled fashion, to achieve scalability. The project exploits the insight that many such algorithms are inherently stochastic, and hence robust to inconsistencies. My thesis is that elastic coordination can lead to significant, consistent performance improvements across a wide range of applications, while guaranteeing provably correct answers. ScaleML will apply elastic coordination to two key scenarios: scalability inside a single multi-threaded machine, and scalability across networks of machines.
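As a toy illustration of relaxed synchronization (not the project's algorithms), the sketch below runs stochastic gradient descent on a shared parameter vector from several threads without any locking, so workers may read slightly stale values; the stochasticity of SGD tolerates this inconsistency. The least-squares objective, step size, and all names are assumptions made for the example.

import threading
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x = np.zeros(20)                          # shared parameters, updated without locks

def worker(seed, num_steps=5000, step_size=1e-3):
    local_rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        i = local_rng.integers(len(A))    # sample one data point
        grad = (A[i] @ x - b[i]) * A[i]   # gradient of 0.5 * (A[i] @ x - b[i])**2
        np.subtract(x, step_size * grad, out=x)   # in-place, unsynchronized update

threads = [threading.Thread(target=worker, args=(seed,)) for seed in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parameter error:", np.linalg.norm(x - x_true))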
Conceptually, the project’s impact is in providing a set of new design principles and algorithms for scalable computation. It will develop these insights into a set of tools and working examples for scalable distributed machine learning.
Max ERC Funding
1 494 121 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym SYMCAR
Project Symbolic Computation and Automated Reasoning for Program Analysis
Researcher (PI) Laura Kovacs
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Individuals, industries, and nations depend on software and on systems that use software. Automated approaches are needed to eliminate tedious aspects of software development and to help software developers deal with increasing software complexity. Automatic program analysis aims to discover program properties, preventing programmers from introducing errors while making changes, and can drastically cut the time needed for program development.
This project addresses the challenge of automating program analysis by developing rigorous mathematical techniques for analyzing the logically complex parts of software. We will carry out novel research in computer theorem proving and symbolic computation, and integrate program analysis techniques with new approaches to program assertion synthesis and to reasoning with both theories and quantifiers. The common theme of the project is a creative development of automated reasoning techniques based on our recently introduced symbol elimination method. Symbol elimination makes the project challenging, risky and interdisciplinary, bridging computer science, mathematics, and logic.
Symbol elimination will enhance program analysis, in particular by generating polynomial and quantified first-order program properties that cannot be derived by other methods. As many program properties are best expressed using quantified formulas with arithmetic, our project will take a significant step toward analyzing large systems. Since program analysis requires reasoning in the combination of first-order logic and theories, we will design efficient algorithms for automated reasoning with both theories and quantifiers. Our results will be supported by the development of world-leading tools supporting symbol elimination in program analysis.
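To make the kinds of properties mentioned above concrete, here are two generic, textbook-style loops whose correctness rests on exactly such invariants, one polynomial and one quantified over an array prefix. These are illustrations only; the summary does not tie the project to these particular examples, and the assertions merely state the invariants rather than derive them.

def sum_of_odds(n):
    # Sums the first n odd numbers; the loop maintains the polynomial
    # invariant s == i * i.
    i, s = 0, 0
    while i < n:
        s += 2 * i + 1
        i += 1
        assert s == i * i
    return s

def zero_prefix(a, n):
    # Zeroes the first n entries of a; the loop maintains the quantified
    # invariant: forall j, 0 <= j < i implies a[j] == 0.
    i = 0
    while i < n:
        a[i] = 0
        i += 1
        assert all(a[j] == 0 for j in range(i))
    return a

print(sum_of_odds(5))               # 25
print(zero_prefix([7, 7, 7], 3))    # [0, 0, 0]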
Our project brings breakthrough approaches to program analysis, which, together with other advances in the area, will reduce the cost of developing safe and reliable computer systems used in our daily life.
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31