Project acronym ALPHA
Project Alpha Shape Theory Extended
Researcher (PI) Herbert Edelsbrunner
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Alpha shapes were invented in the early 1980s, and their implementation in three dimensions in the early 1990s was at the forefront of the exact-arithmetic paradigm that enabled fast and correct geometric software. In the late 1990s, alpha shapes motivated the development of the wrap algorithm for surface reconstruction and of persistent homology, which was the starting point of rapidly expanding interest in topological algorithms aimed at data analysis questions.
We now see alpha shapes, wrap complexes, and persistent homology as three aspects of a larger theory, which we propose to fully develop. This viewpoint was a long time coming and finds its clear expression within a generalized
version of discrete Morse theory. This unified framework offers new opportunities, including
(I) the adaptive reconstruction of shapes driven by the cavity structure;
(II) the stochastic analysis of all aspects of the theory;
(III) the computation of persistence of dense data, both in scale and in depth;
(IV) the study of long-range order in periodic and near-periodic point configurations.
These capabilities will significantly deepen as well as widen the theory and enable new applications in the sciences. To gain focus, we concentrate on low-dimensional applications in structural molecular biology and particle systems.
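The connection between alpha shapes and persistent homology can be illustrated in its simplest, zero-dimensional form: connected components of a point set are born at scale zero and die when a growing distance threshold merges them. The sketch below is only a minimal, self-contained illustration of this idea (it is single-linkage clustering via a union-find over the sorted edges), not the project's actual algorithms, which operate on full alpha complexes.

```python
from itertools import combinations

def persistence_h0(points):
    """0-dimensional persistence of a Euclidean point set: each
    component is born at scale 0 and dies when an edge of the
    distance filtration merges it into another component."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    # edges of the filtration, sorted by the scale at which they appear
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))

    bars = []                      # finite (birth, death) intervals
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:               # two components merge at scale d
            parent[ri] = rj
            bars.append((0.0, d))
    return bars  # the one surviving component's infinite bar is omitted

# two well-separated pairs: two short bars, one long bar at the merge scale
print(persistence_h0([(0, 0), (1, 0), (10, 0), (11, 0)]))
```

The long bar (death at scale 9) reflects the two-cluster structure of the input, which is exactly the kind of multiscale signal persistent homology extracts.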
Max ERC Funding
1 678 432 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym Browsec
Project Foundations and Tools for Client-Side Web Security
Researcher (PI) Matteo MAFFEI
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary The constantly increasing number of attacks on web applications shows that their rapid development has not been accompanied by adequate security foundations, and demonstrates the lack of solid security enforcement tools. Indeed, web applications expose a gigantic attack surface, which hinders a rigorous understanding and enforcement of security properties. Hence, despite worthwhile efforts to design secure web applications, users will for the foreseeable future be confronted with vulnerable, or maliciously crafted, code. Unfortunately, end users currently have no way to reliably protect themselves from malicious applications.
BROWSEC will develop a holistic approach to client-side web security, laying its theoretical foundations and developing innovative security enforcement technologies. In particular, BROWSEC will deliver the first client-side tool to secure web applications that is practical, in that it is implemented as an extension and can thus be easily deployed at large, and also provably sound, i.e., backed up by machine-checked proofs that the tool provides end users with the required security guarantees. At the core of the proposal lies a novel monitoring technique, which treats the browser as a blackbox and intercepts its inputs and outputs in order to prevent dangerous information flows. With this lightweight monitoring approach, we aim at enforcing strong security properties without requiring any expensive and, given the dynamic nature of web applications, statically infeasible program analysis.
BROWSEC is thus a multidisciplinary research effort, promising practical impact and delivering breakthrough advancements in various disciplines, such as web security, JavaScript semantics, software engineering, and program verification.
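The core monitoring idea above, treating the browser as a black box and checking only its inputs and outputs for dangerous information flows, can be caricatured in a few lines. The following is purely an illustrative toy (all class and method names are hypothetical, and real flow monitoring must track labels through computation, not just exact values), meant only to show the intercept-and-label pattern the proposal describes.

```python
SECRET, PUBLIC = "secret", "public"

class FlowMonitor:
    """Toy black-box monitor: label values entering the browser, and
    block any outgoing request whose payload carries a secret label."""
    def __init__(self):
        self.labels = {}                       # value -> label

    def intercept_input(self, value, label):
        self.labels[value] = label             # tag data at the boundary
        return value

    def intercept_output(self, destination, payload):
        if self.labels.get(payload) == SECRET:
            return f"BLOCKED flow of tainted data to {destination}"
        return f"ALLOWED request to {destination}"

m = FlowMonitor()
cookie = m.intercept_input("session=abc123", SECRET)
query = m.intercept_input("cats", PUBLIC)
print(m.intercept_output("evil.example", cookie))
print(m.intercept_output("search.example", query))
```

The point of the design is that the monitor never needs to understand the browser's internals; soundness is argued once about the interception layer, which is what makes machine-checked proofs feasible.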
Max ERC Funding
1 990 000 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym COMPLEX REASON
Project The Parameterized Complexity of Reasoning Problems
Researcher (PI) Stefan Szeider
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Reasoning, i.e., deriving conclusions from facts, is a fundamental task in Artificial Intelligence, arising in a wide range of applications from Robotics to Expert Systems. The aim of this project is to devise new efficient algorithms for real-world reasoning problems and to gain new insights into the question of what makes a reasoning problem hard, and what makes it easy. As the key to novel and groundbreaking results we propose to study reasoning problems within the framework of Parameterized Complexity, a new and rapidly emerging field of Algorithms and Complexity. Parameterized Complexity takes into account those structural aspects of problem instances which are most significant for empirically observed problem hardness. Most of the considered reasoning problems are intractable in general, but the real-world context of their origin provides structural information that can be made accessible to algorithms in the form of parameters. This makes Parameterized Complexity an ideal setting for the analysis and efficient solution of these problems. A systematic study of the Parameterized Complexity of reasoning problems that covers both theoretical and empirical aspects has so far been missing. This proposal sets out to do exactly this and therefore has great potential for groundbreaking new results. The proposed research aims at a significant impact on the research culture by setting the grounds for a closer cooperation between theorists and practitioners.
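The parameterized-complexity idea of confining the combinatorial explosion to a parameter is classically illustrated by vertex cover, a standard textbook example rather than one of this project's reasoning problems: deciding whether a graph has a vertex cover of size at most k takes O(2^k · m) time via a bounded search tree, which is fast whenever the parameter k is small, regardless of the graph's size.

```python
def vertex_cover(edges, k):
    """Bounded-search-tree FPT algorithm: return a vertex cover of
    size <= k, or None if none exists. At each step some endpoint of
    an uncovered edge must join the cover, so we branch on the two
    endpoints; the recursion depth is bounded by the parameter k."""
    if not edges:
        return set()                      # every edge is covered
    if k == 0:
        return None                       # uncovered edge, no budget left
    u, v = edges[0]
    for w in (u, v):                      # branch: w joins the cover
        rest = [e for e in edges if w not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

triangle = [(1, 2), (2, 3), (1, 3)]
print(vertex_cover(triangle, 1))          # no cover of size 1 exists
print(vertex_cover(triangle, 2))          # a cover of size 2 exists
```

The same scheme, with the parameter drawn from the structure of real-world instances (e.g., treewidth or backdoor size), is what makes parameterized algorithms attractive for reasoning problems that are intractable in general.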
Max ERC Funding
1 421 130 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym CT
Project ‘Challenging Time(s)’ – A New Approach to Written Sources for Ancient Egyptian Chronology
Researcher (PI) Roman GUNDACKER
Host Institution (HI) OESTERREICHISCHE AKADEMIE DER WISSENSCHAFTEN
Call Details Starting Grant (StG), SH6, ERC-2017-STG
Summary The chronology of ancient Egypt is a golden thread for the memory of early civilisation. It is not only the scaffolding of four millennia of Egyptian history, but also one of the pillars of the chronology of the entire ancient Near East and eastern Mediterranean. The basic division of Egyptian history into 31 dynasties was introduced by Manetho, an Egyptian historian (c. 280 BC) writing in Greek for the Ptolemaic kings. Despite the fact that this scheme was adopted by Egyptologists 200 years ago and remains in use until today, there has never been an in-depth analysis of Manetho’s kinglist and of the names in it. Until now, identifying the Greek renderings of royal names with their hieroglyphic counterparts was more or less educated guesswork. It is thus essential to introduce the principles of textual criticism, to evaluate royal names on a firm linguistic basis and to provide for the first time ever an Egyptological commentary on Manetho’s kinglist. Just like Manetho did long ago, now it is necessary to gather all inscriptional evidence on Egyptian history: dated inscriptions, biographic and prosopographic data of royalty and commoners, genuine Egyptian kinglists and annals. These data must be critically evaluated in context, their assignment to specific reigns must be reconsidered, and genealogies and sequences of officials must be reviewed. The results are not only important for Egyptian historical chronology and for our understanding of the Egyptian perception of history, but also for the interpretation of chronological data gained from archaeological excavations (material culture) and sciences (14C dates, which are interpreted on the basis of historical chronology, e.g., via ‘Bayesian modelling’). The applicant has already shown the significance of this approach in pilot studies on the pyramid age. 
Further work in cooperation with international specialists will thus shed new light on ancient sources in order to determine the chronology of early civilisation.
Max ERC Funding
1 499 992 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym DOiCV
Project Discrete Optimization in Computer Vision: Theory and Practice
Researcher (PI) Vladimir Kolmogorov
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary This proposal aims at developing new inference algorithms for graphical models with discrete variables, with a focus on the MAP estimation task. MAP estimation algorithms such as graph cuts have transformed computer vision in the last decade; they are now routinely used and are also utilized in commercial systems.
Topics of this project fall into 3 categories.
Theoretically-oriented: Graph cut techniques come from combinatorial optimization. They can minimize a certain class of functions, namely submodular functions with unary and pairwise terms. Larger classes of functions can also be minimized in polynomial time, and a complete characterization of such classes has been established; they include k-submodular functions for an integer k ≥ 1.
I investigate whether such tools from discrete optimization can lead to more efficient inference algorithms for practical problems. I have already found an important application of k-submodular functions for minimizing Potts energy functions that are frequently used in computer vision. The concept of submodularity has also recently appeared in the context of computing marginals in graphical models; here, discrete optimization tools could be used.
Practically-oriented: Modern techniques such as graph cuts and tree-reweighted message passing give excellent results for some graphical models such as with the Potts energies. However, they fail for more complicated models. I aim to develop new tools for tackling such hard energies. This will include exploring tighter convex relaxations of the problem.
Applications, sequence tagging problems: Recently, we developed new algorithms for inference in pattern-based Conditional Random Fields (CRFs) on a chain. This model can naturally be applied to sequence tagging problems; it generalizes the popular CRF model by giving it more flexibility. I will investigate (i) applications to specific tasks, such as the protein secondary structure prediction, and (ii) ways to extend the model.
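For pairwise terms over binary variables, the submodularity condition that decides graph-cut representability is a single inequality, f(0,0) + f(1,1) ≤ f(0,1) + f(1,0). A minimal check, shown only to make the condition mentioned above concrete (the function names are illustrative):

```python
def is_submodular(f):
    """A pairwise term f on {0,1}^2 is submodular, and hence exactly
    minimizable by a graph cut, iff f(0,0)+f(1,1) <= f(0,1)+f(1,0)."""
    return f(0, 0) + f(1, 1) <= f(0, 1) + f(1, 0)

# Potts-style smoothness term: penalizes disagreeing labels
potts = lambda x, y: 0 if x == y else 1
# a term rewarding disagreement: not graph-cut representable
anti = lambda x, y: 1 if x == y else 0

print(is_submodular(potts))   # smoothness terms pass the test
print(is_submodular(anti))    # supermodular terms fail it
```

Energies that violate this inequality are exactly the "hard" cases for which the proposal's tighter convex relaxations are needed.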
Max ERC Funding
1 641 585 €
Duration
Start date: 2014-06-01, End date: 2019-05-31
Project acronym GRAPH GAMES
Project Quantitative Graph Games: Theory and Applications
Researcher (PI) Krishnendu Chatterjee
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary The theory of games played on graphs provides the mathematical foundations to study numerous important problems in branches of mathematics, economics, computer science, biology, and other fields. One key application area in computer science is the formal verification of reactive systems. The system is modeled as a graph, in which vertices represent states of the system, edges represent transitions, and paths represent behaviors of the system. The verification of the system in an arbitrary environment is then studied as a game played on the graph, where the players represent the different interacting agents. Traditionally, these games have been studied either with Boolean objectives or with single quantitative objectives. However, for the verification of systems that must behave correctly in resource-constrained environments (such as embedded systems), both Boolean and quantitative objectives are necessary: the Boolean objective for the correctness specification, and the quantitative objective for the resource constraints. Thus we need to generalize the theory of graph games such that the objectives can express combinations of quantitative and Boolean objectives. In this project, we will focus on the following research objectives for the study of graph games with quantitative objectives:
(1) develop the mathematical theory and algorithms for the new class of games on graphs obtained by combining quantitative and Boolean objectives;
(2) develop practical techniques (such as compositional and abstraction techniques) that allow our algorithmic solutions be implemented efficiently to handle large game graphs;
(3) explore new application areas to demonstrate the application of quantitative graph games in diverse disciplines; and
(4) develop the theory of games on graphs with infinite state space and with quantitative objectives.
Since the theory of graph games is foundational in several disciplines, the new algorithmic solutions are expected to have broad impact.
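The simplest Boolean objective in this setting is reachability, solved by the classic backward attractor fixed point: a vertex belongs to the winning set if its owner can pick a successor already known to be winning, or if it belongs to the opponent and all successors are winning. A minimal sketch (the graph encoding via `owner` and `succ` dictionaries is an illustrative choice, not the project's notation):

```python
def attractor(owner, succ, player, targets):
    """Vertices from which `player` can force the play into `targets`
    in a two-player graph game. owner[v] in {0, 1} says who moves at
    vertex v; succ[v] lists v's successors."""
    attr = set(targets)
    changed = True
    while changed:                      # backward fixed-point iteration
        changed = False
        for v in owner:
            if v in attr:
                continue
            s = succ[v]
            wins = (any(w in attr for w in s) if owner[v] == player
                    else bool(s) and all(w in attr for w in s))
            if wins:
                attr.add(v)
                changed = True
    return attr

# vertex 5 is a trap the opponent controls; everything else reaches 4
owner = {1: 0, 2: 1, 3: 1, 4: 0, 5: 1}
succ = {1: [2, 5], 2: [4], 3: [1], 4: [4], 5: [5]}
print(attractor(owner, succ, 0, {4}))
```

Combining such Boolean winning conditions with quantitative payoffs (e.g., bounding accumulated resource usage along the forced plays) is precisely the generalization the project's first objective targets.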
Max ERC Funding
1 163 111 €
Duration
Start date: 2011-12-01, End date: 2016-11-30
Project acronym GRAPHALGAPP
Project Challenges in Graph Algorithms with Applications
Researcher (PI) Monika Hildegard Henzinger
Host Institution (HI) UNIVERSITAT WIEN
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary This project has two thrusts of equal importance. Firstly, it aims to develop new graph algorithmic techniques, specifically in the areas of dynamic graph algorithms, online algorithms and approximation algorithms for graph-based optimization problems. Thus, it proposes to solve long-standing, fundamental problems that are central to the field of algorithms. Secondly, it plans to apply these techniques to graph algorithmic problems in different fields of application, specifically in computer-aided verification, computational biology, and web-based advertisement with the goal of significantly advancing the state-of-the-art in these fields. This includes theoretical work as well as experimental evaluation on real-life data sets.
Thus, the goal of this project is a comprehensive approach to algorithms research which involves both excellent fundamental algorithms research as well as solving concrete applications.
Max ERC Funding
2 428 258 €
Duration
Start date: 2014-03-01, End date: 2019-08-31
Project acronym L3VISU
Project Life Long Learning for Visual Scene Understanding (L3ViSU)
Researcher (PI) Christoph Lampert
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "My goal in the project is to develop and analyze algorithms that use continuous, open-ended machine learning from visual input data (images and videos) in order to interpret visual scenes on a level comparable to humans.
L3ViSU is based on the hypothesis that we can only significantly improve the state of the art in computer vision algorithms by giving them access to background and contextual knowledge about the visual world, and that the most feasible way to obtain such knowledge is by extracting it (semi-) automatically from incoming visual stimuli. Consequently, at the core of L3ViSU lies the idea of life-long visual learning.
Sufficient data for such an effort is readily available, e.g. through digital TV-channels and media-
sharing Internet platforms, but the question of how to use these resources for building better computer vision systems is wide open. In L3ViSU we will rely on modern machine learning concepts, representing task-independent prior knowledge as prior distributions and function regularizers. This functional form allows them to help solving specific tasks by guiding the solution to ""reasonable"" ones, and to suppress mistakes that violate ""common sense"". The result will not only be improved prediction quality, but also a reduction in the amount of manual supervision necessary, and the possibility to introduce more semantics into computer vision, which has recently been identified as one of the major tasks for the next decade.
L3ViSU is a project on the interface between computer vision and machine learning. Solving it requires expertise in both areas, as it is represented in my research group at IST Austria. The life-long learning concepts developed within L3ViSU, however, will have impact outside of both areas, let it be as basis of life-long learning system with a different focus, such as in bioinformatics, or as a foundation for projects of commercial value, such as more intelligent driver assistance or video surveillance systems."
Max ERC Funding
1 464 712 €
Duration
Start date: 2013-01-01, End date: 2018-06-30
Project acronym MATERIALIZABLE
Project MATERIALIZABLE: Intelligent fabrication-oriented Computational Design and Modeling
Researcher (PI) Bernd BICKEL
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary While access to 3D-printing technology is becoming ubiquitous and provides revolutionary possibilities for fabricating complex, functional, multi-material objects with stunning properties, its potential impact is currently significantly limited by the lack of efficient and intuitive methods for content creation. Existing tools are usually restricted to expert users, have been developed based on the capabilities of traditional manufacturing processes, and do not sufficiently take fabrication constraints into account. Scientifically, we face the fundamental challenge that existing simulation techniques and design approaches for predicting the physical properties of materials and objects at the resolution of modern 3D printers are too slow and do not scale with increasing object complexity. The problem is extremely challenging because real-world materials exhibit extraordinary variety and complexity.
To address these challenges, I suggest a novel computational approach that facilitates intuitive design, accurate and fast simulation techniques, and a functional representation of 3D content. I propose a multi-scale representation of functional goals and hybrid models that describes the physical behavior at a coarse scale and the relationship to the underlying material composition at the resolution of the 3D printer. My approach is to combine data-driven and physically-based modeling, providing both the required speed and accuracy through smart precomputations and tailored simulation techniques that operate on the data. A key aspect of this modeling and simulation approach is to identify domains that are sufficiently low-dimensional to be correctly sampled. Subsequently, I propose the fundamental re-thinking of the workflow, leading to solutions that allow synthesizing model instances optimized on-the-fly for a specific output device. The principal applicability will be evaluated for functional goals, such as appearance, deformation, and sensing capabilities.
Max ERC Funding
1 497 730 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym N-T-AUTONOMY
Project Non-Territorial Autonomy as Minority Protection in Europe: An Intellectual and Political History of a Travelling Idea, 1850-2000
Researcher (PI) Börries KUZMANY
Host Institution (HI) OESTERREICHISCHE AKADEMIE DER WISSENSCHAFTEN
Call Details Starting Grant (StG), SH6, ERC-2017-STG
Summary Over the past 150 years, non-territorial autonomy has been one of three models for dealing with linguistic or ethnic minorities within several European states. Compared with the other two, i.e. the recognition of minority rights as individual rights and territorial self-rule, non-territorial autonomy has received little attention. This project proposes to write the first history of non-territorial autonomy as an applied policy tool in minority protection and as an intellectual concept with a chequered history across Europe. Intellectuals, politicians, and legal scholars across the political spectrum from the far left to the far right supported this idea, although they were aware of the risks of strengthening national differences by promoting such a collective approach to minority protection. The project explores how this idea of granting cultural rights to a national group as a corporate body within a state, as a means of integrating diverse nationalities, travelled and transformed from the Habsburg Empire of 1850 to the present. We propose to 1) trace the development and circulation of theoretical conceptions and political applications of non-territorial autonomy within the Habsburg Empire, by mapping the networks of scholars as well as politicians who advocated for it; 2) explain the continuities in the development of the idea, and its manifestations in policies adopted by interwar Central and Eastern European nation states, where communists, socialists, liberals and fascists alike were able to translate elements of non-territorial autonomy into their ideologies and programmes; 3) analyse the treatment of non-territorial autonomy, which was advocated by minority lobby groups, in international minority protection in the 20th century, despite strong opposition by international organisations to practices based on it. We rely on a mixture of historiographical methods developed in nationalism studies to analyse the idea's translation in entangled transnational spaces.
Max ERC Funding
1 499 556 €
Duration
Start date: 2018-04-01, End date: 2023-03-31