Project acronym 9 SALT
Project Reassessing Ninth Century Philosophy. A Synchronic Approach to the Logical Traditions
Researcher (PI) Christophe Florian Erismann
Host Institution (HI) UNIVERSITAT WIEN
Call Details Consolidator Grant (CoG), SH5, ERC-2014-CoG
Summary This project aims at a better understanding of the philosophical richness of ninth century thought using the unprecedented and highly innovative method of the synchronic approach. The hypothesis directing this synchronic approach is that studying together in parallel the four main philosophical traditions of the century – i.e. Latin, Greek, Syriac and Arabic – will bring results that the traditional enquiry limited to one tradition alone can never reach. This implies pioneering a new methodology to overcome the compartmentalization of research which prevails nowadays. Using this method is only possible because the four conditions of applicability – comparable intellectual environment, common text corpus, similar methodological perspective, commensurable problems – are fulfilled. The ninth century, a time of cultural renewal in the Carolingian, Byzantine and Abbasid empires, possesses the remarkable characteristic – which ensures commensurability – that the same texts, namely the writings of Aristotelian logic (mainly Porphyry’s Isagoge and Aristotle’s Categories) were read and commented upon in Latin, Greek, Syriac and Arabic alike.
Logic is fundamental to philosophical enquiry. The contested question is the human capacity to rationalise, analyse and describe the sensible reality, to understand the ontological structure of the world, and to define the types of entities which exist. The use of this unprecedented synchronic approach will allow us a deeper understanding of the positions, a clear identification of the a priori postulates of the philosophical debates, and a critical evaluation of the arguments used. It provides a unique opportunity to compare the different traditions and highlight the heritage which is common, to stress the specificities of each tradition when tackling philosophical issues and to discover the doctrinal results triggered by their mutual interactions, be they constructive (scholarly exchanges) or polemic (religious controversies).
Max ERC Funding
1 998 566 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym AI4REASON
Project Artificial Intelligence for Large-Scale Computer-Assisted Reasoning
Researcher (PI) Josef Urban
Host Institution (HI) CESKE VYSOKE UCENI TECHNICKE V PRAZE
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary The goal of the AI4REASON project is a breakthrough in what is considered a very hard problem in AI and automation of reasoning, namely the problem of automatically proving theorems in large and complex theories. Such complex formal theories arise in projects aimed at verification of today's advanced mathematics such as the Formal Proof of the Kepler Conjecture (Flyspeck), verification of software and hardware designs such as the seL4 operating system kernel, and verification of other advanced systems and technologies on which today's information society critically depends.
Designing an explicitly programmed solution to this problem seems extremely complex and unlikely to succeed. However, we have recently demonstrated that the performance of existing approaches can be multiplied by data-driven AI methods that learn reasoning guidance from large proof corpora. The breakthrough will be achieved by developing such novel AI methods. First, we will devise suitable Automated Reasoning and Machine Learning methods that learn reasoning knowledge and steer the reasoning processes at various levels of granularity. Second, we will combine them into autonomous self-improving AI systems that interleave deduction and learning in positive feedback loops. Third, we will develop approaches that aggregate reasoning knowledge across many formal, semi-formal and informal corpora and deploy the methods as strong automation services for the formal proof community.
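The interleaving of deduction and learning described above can be pictured with a minimal sketch. This is an illustration only, not the project's actual system: the prover hook `try_prove`, the bag-of-symbols features and the overlap score are hypothetical stand-ins for real automated provers and learned guidance models.

```python
# Hedged sketch of a learn-then-prove feedback loop: a toy premise ranker is
# trained on past proofs and guides the prover; each new proof found is fed
# back as training data. All names here are illustrative, not from AI4REASON.
from collections import Counter

def featurize(statement):
    """Toy features: a bag of the symbols occurring in the statement."""
    return Counter(statement.split())

def similarity(f1, f2):
    """Overlap between two feature bags (stand-in for a learned model)."""
    return sum((f1 & f2).values())

def rank_premises(conjecture, premises, proof_corpus):
    """Score premises by how often similar past conjectures used them."""
    cf = featurize(conjecture)
    scores = {p: 0.0 for p in premises}
    for past_conjecture, used in proof_corpus:
        weight = similarity(cf, featurize(past_conjecture))
        for p in used:
            if p in scores:
                scores[p] += weight
    return sorted(premises, key=lambda p: -scores[p])

def feedback_loop(conjectures, premises, try_prove, rounds=3):
    """Interleave deduction and learning in a positive feedback loop."""
    proof_corpus = []   # pairs (conjecture, premises used in its proof)
    proved = set()
    for _ in range(rounds):
        for c in conjectures:
            if c in proved:
                continue
            ranked = rank_premises(c, premises, proof_corpus)
            used = try_prove(c, ranked[:10])  # prover sees only top premises
            if used is not None:
                proved.add(c)
                proof_corpus.append((c, used))  # learn from the new proof
    return proof_corpus
```

Each round, proofs found with the current guidance enlarge the corpus, which sharpens the ranking in the next round; in the real setting the ranker would be a trained model and the prover an ATP running over large formal libraries.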
The expected outcome is our ability to prove automatically at least 50% more theorems in high-assurance projects such as Flyspeck and seL4, bringing a major breakthrough in formal reasoning and verification. As an AI effort, the project offers a unique path to large-scale semantic AI. The formal corpora concentrate centuries of deep human thinking in a computer-understandable form on which deductive and inductive AI can be combined and co-evolved, providing new insights into how humans do mathematics and science.
Max ERC Funding
1 499 500 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym AYURYOG
Project Medicine, Immortality, Moksha: Entangled Histories of Yoga, Ayurveda and Alchemy in South Asia
Researcher (PI) Dagmar Wujastyk
Host Institution (HI) UNIVERSITAT WIEN
Call Details Starting Grant (StG), SH6, ERC-2014-STG
Summary The project will examine the histories of yoga, ayurveda and rasashastra (Indian alchemy and iatrochemistry) from the tenth century to the present, focussing on the disciplines' health, rejuvenation and longevity practices. The goals of the project are to reveal the entanglements of these historical traditions, and to trace the trajectories of their evolution as components of today's global healthcare and personal development industries.
Our hypothesis is that practices aimed at achieving health, rejuvenation and longevity constitute a key area of exchange between the three disciplines, preparing the ground for a series of important pharmaceutical and technological innovations and also profoundly influencing the discourses of today's medicalized forms of globalized yoga as well as of contemporary institutionalized forms of ayurveda and rasashastra.
Drawing upon the primary historical sources of each respective tradition as well as on fieldwork data, the research team will explore the shared terminology, praxis and theory of these three disciplines. We will examine why, when and how health, rejuvenation and longevity practices were employed; how each discipline’s discourse and practical applications relates to those of the others; and how past encounters and cross-fertilizations impact on contemporary health-related practices in yogic, ayurvedic and alchemists’ milieus.
The five-year project will be based at the Department of South Asian, Tibetan and Buddhist Studies at Vienna University and carried out by an international team of 3 post-doctoral researchers. The research will be grounded in the fields of South Asian studies and social history. An international workshop and an international conference will be organized to present and discuss the research results, which will also be published in peer-reviewed journals, an edited volume, and in individual monographs. A project website will provide open access to all research results.
Max ERC Funding
1 416 146 €
Duration
Start date: 2015-06-01, End date: 2020-05-31
Project acronym Big Splash
Project Big Splash: Efficient Simulation of Natural Phenomena at Extremely Large Scales
Researcher (PI) Christopher John Wojtan
Host Institution (HI) Institute of Science and Technology Austria
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Computational simulations of natural phenomena are essential in science, engineering, product design, architecture, and computer graphics applications. However, despite progress in numerical algorithms and computational power, it is still unfeasible to compute detailed simulations at large scales. To make matters worse, important phenomena like turbulent splashing liquids and fracturing solids rely on delicate coupling between small-scale details and large-scale behavior. Brute-force computation of such phenomena is intractable, and current adaptive techniques are too fragile, too costly, or too crude to capture subtle instabilities at small scales. Increases in computational power and parallel algorithms will improve the situation, but progress will only be incremental until we address the problem at its source.
I propose two main approaches to this problem of efficiently simulating large-scale liquid and solid dynamics. My first avenue of research combines numerics and shape: I will investigate a careful de-coupling of dynamics from geometry, allowing essential shape details to be preserved and retrieved without wasting computation. I will also develop methods for merging small-scale analytical solutions with large-scale numerical algorithms. (These ideas show particular promise for phenomena like splashing liquids and fracturing solids, whose small-scale behaviors are poorly captured by standard finite element methods.) My second main research direction is the manipulation of large-scale simulation data: Given the redundant and parallel nature of physics computation, we will drastically speed up computation with novel dimension reduction and data compression approaches. We can also minimize unnecessary computation by re-using existing simulation data. The novel approaches resulting from this work will undoubtedly synergize to enable the simulation and understanding of complicated natural and biological processes that are presently unfeasible to compute.
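One concrete way to read the data-manipulation direction (a hedged sketch, not the project's published method): simulation snapshots are highly redundant, so stacking them into a matrix and truncating its SVD already yields large compression at small error. The data below is synthetic and invented for the example.

```python
# Hedged sketch: dimension reduction of simulation data via truncated SVD.
import numpy as np

def compress_snapshots(snapshots, rank):
    """snapshots: (n_dofs, n_frames) matrix of simulation states."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]   # truncated factors

def reconstruct(U, s, Vt):
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
# Synthetic "fluid" data: a few coherent large-scale modes plus fine noise.
modes  = rng.standard_normal((10_000, 5))
coeffs = rng.standard_normal((5, 200))
data   = modes @ coeffs + 0.01 * rng.standard_normal((10_000, 200))

U, s, Vt = compress_snapshots(data, rank=5)
approx = reconstruct(U, s, Vt)
rel_err = np.linalg.norm(data - approx) / np.linalg.norm(data)
ratio = data.size / (U.size + s.size + Vt.size)
print(f"compression {ratio:.0f}x, relative error {rel_err:.4f}")
```

The research programme of course goes well beyond this baseline, for instance by exploiting redundancy across simulations and by re-using existing simulation data.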
Max ERC Funding
1 500 000 €
Duration
Start date: 2015-03-01, End date: 2020-02-29
Project acronym Con Espressione
Project Getting at the Heart of Things: Towards Expressivity-aware Computer Systems in Music
Researcher (PI) Gerhard Widmer
Host Institution (HI) UNIVERSITAT LINZ
Call Details Advanced Grant (AdG), PE6, ERC-2014-ADG
Summary What makes music so important, what can make a performance so special and stirring? It is the things the music expresses, the emotions it induces, the associations it evokes, the drama and characters it portrays. The sources of this expressivity are manifold: the music itself, its structure, orchestration, personal associations, social settings, but also – and very importantly – the act of performance, the interpretation and expressive intentions made explicit by the musicians through nuances in timing, dynamics etc.
Thanks to research in fields like Music Information Research (MIR), computers can do many useful things with music, from beat and rhythm detection to song identification and tracking. However, they are still far from grasping the essence of music: they cannot tell whether a performance expresses playfulness or ennui, solemnity or gaiety, determination or uncertainty; they cannot produce music with a desired expressive quality; they cannot interact with human musicians in a truly musical way, recognising and responding to the expressive intentions implied in their playing.
The project is about developing machines that are aware of certain dimensions of expressivity, specifically in the domain of (classical) music, where expressivity is both essential and – at least as far as it relates to the act of performance – can be traced back to well-defined and measurable parametric dimensions (such as timing, dynamics, articulation). We will develop systems that can recognise, characterise, search music by expressive aspects, generate, modify, and react to expressive qualities in music. To do so, we will (1) bring together the fields of AI, Machine Learning, MIR and Music Performance Research; (2) integrate theories from Musicology to build more well-founded models of music understanding; (3) support model learning and validation with massive musical corpora of a size and quality unprecedented in computational music research.
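As a small illustration of what a "measurable parametric dimension" of performance looks like (invented data, not project code): local tempo can be read directly off performed note onsets set against their score positions.

```python
# Hedged sketch: turning performed onsets into a local tempo curve, one of
# the low-level expressive descriptors (timing) named above.
def tempo_curve(score_beats, onset_times):
    """Local tempo in BPM between consecutive performed notes."""
    curve = []
    for i in range(1, len(onset_times)):
        beats   = score_beats[i] - score_beats[i - 1]
        seconds = onset_times[i] - onset_times[i - 1]
        curve.append(60.0 * beats / seconds)
    return curve

# A performer slowing down over four beats (a ritardando):
beats  = [0.0, 1.0, 2.0, 3.0, 4.0]       # score positions
onsets = [0.0, 0.50, 1.05, 1.70, 2.45]   # performed onsets in seconds
print([round(bpm) for bpm in tempo_curve(beats, onsets)])
# -> [120, 109, 92, 80]: the expressive gesture made measurable
```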
Max ERC Funding
2 318 750 €
Duration
Start date: 2016-01-01, End date: 2021-12-31
Project acronym CrowdLand
Project Harnessing the power of crowdsourcing to improve land cover and land-use information
Researcher (PI) Steffen Martin Fritz
Host Institution (HI) INTERNATIONALES INSTITUT FUER ANGEWANDTE SYSTEMANALYSE
Call Details Consolidator Grant (CoG), SH3, ERC-2013-CoG
Summary Information about land cover, land use and the change over time is used for a wide range of applications such as nature protection and biodiversity, forest and water management, urban and transport planning, natural hazard prevention and mitigation, agricultural policies and monitoring climate change. Furthermore, high quality spatially explicit information on land cover change is an essential input variable to land use change modelling, which is increasingly being used to better understand the potential impact of certain policies. The amount of observed land cover change also serves as an important indicator of how well different regional, national and European policies have been implemented.
However, outside Europe, and in the developing world in particular, information on land cover and land cover change is hardly available, and no dense, sample-based national or regional monitoring approach such as LUCAS exists to deliver sufficiently accurate land cover and land cover change information. Moreover, in developing countries in particular, there is little or no information on land use and crop management: only very limited data from FAO and incomplete coverage of sub-national statistics (e.g. from IFPRI) are available.
This research project will assess the potential of using crowdsourcing to close these big data gaps in developing and developed countries through a number of case studies and different data collection methods. The CrowdLand project will be carried out in two very different environments, i.e. Austria and Kenya. The overall research objectives of this project are to 1) test the potential of using social gaming to collect land use information; 2) test the potential of using mobile money to collect data in developing countries; 3) understand the quality of data collected via crowdsourcing; and 4) apply advanced methods to filter crowdsourced data in order to attain improved accuracy.
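For objective 4, a simple baseline gives the flavour of what "filtering crowdsourced data" means (a hedged sketch only; the project would develop more advanced methods, and all names below are invented):

```python
# Hedged sketch: majority-vote filtering of crowdsourced land-cover labels,
# keeping a site only when enough volunteers agree strongly enough.
from collections import Counter

def filter_crowd_labels(votes, min_votes=3, min_agreement=0.7):
    """votes maps a site id to the list of labels volunteers assigned."""
    accepted = {}
    for site, labels in votes.items():
        if len(labels) < min_votes:
            continue                      # too little evidence
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            accepted[site] = label        # consensus reached
    return accepted

votes = {
    "site_1": ["cropland", "cropland", "cropland", "forest"],
    "site_2": ["forest", "cropland", "urban"],   # no consensus: dropped
    "site_3": ["urban", "urban"],                # too few votes: dropped
}
print(filter_crowd_labels(votes))  # {'site_1': 'cropland'}
```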
Max ERC Funding
1 397 200 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym DecentLivingEnergy
Project Energy and emissions thresholds for providing decent living standards to all
Researcher (PI) Narasimha Desirazu Rao
Host Institution (HI) INTERNATIONALES INSTITUT FUER ANGEWANDTE SYSTEMANALYSE
Call Details Starting Grant (StG), SH3, ERC-2014-STG
Summary There is confusion surrounding how poverty eradication will contribute to climate change. This is due to knowledge gaps related to the material basis of poverty, and the relationship between energy and human development. Addressing this issue rigorously requires bridging gaps between global justice, economics, energy systems analysis, and industrial ecology, and applying this knowledge to projections of anthropogenic greenhouse gases. This project will develop a body of knowledge that quantifies the energy needs and related climate change impacts for providing decent living standards to all. The research will address three questions: which goods and services, and with what characteristics, constitute 'decent living standards'? What energy resources are required to provide these goods and services in different countries, and what impact will this energy use have on climate change? How do the constituents of decent living and their energy needs evolve as countries develop?
The first task will operationalize basic needs views of human development and advance their empirical validity by discerning characteristics of basic goods in household consumption patterns. The second will quantify the energy needs (and climate-related emissions) for decent living constituents and reveal their dependence on culture, climate, technology, and other contextual conditions in countries. This will be done using lifecycle analysis and input-output analysis, and mapping energy to climate change using state-of-the-art energy-economy integrated assessment modelling tools for 5 emerging economies that face the challenges of eradicating poverty and mitigating climate change. The third task will shed light on path dependencies and trends in the evolution of basic goods and their energy intensity using empirical analysis. This research will identify opportunities to shift developing societies towards low-carbon pathways, and help quantify burden-sharing arrangements for climate mitigation.
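The input-output step mentioned above usually takes the standard Leontief form; the following is the textbook accounting, with symbols as commonly defined rather than quoted from the proposal:

```latex
% Leontief input-output accounting (standard form): A is the inter-industry
% coefficient matrix, y a final-demand vector (e.g. the basket of goods that
% constitutes a decent living standard), f the vector of direct energy
% intensities of sectoral output.
\[
  x = (I - A)^{-1} y
  \qquad \text{(gross output required to deliver } y \text{)}
\]
\[
  E = f^{\top} (I - A)^{-1} y
  \qquad \text{(energy embodied in } y \text{ across the whole supply chain)}
\]
```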
Max ERC Funding
869 722 €
Duration
Start date: 2015-06-01, End date: 2019-05-31
Project acronym DOiCV
Project Discrete Optimization in Computer Vision: Theory and Practice
Researcher (PI) Vladimir Kolmogorov
Host Institution (HI) INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary This proposal aims at developing new inference algorithms for graphical models with discrete variables, with a focus on the MAP estimation task. MAP estimation algorithms such as graph cuts have transformed computer vision in the last decade; they are now routinely used and are also utilized in commercial systems.
Topics of this project fall into 3 categories.
Theoretically-oriented: Graph cut techniques come from combinatorial optimization. They can minimize a certain class of functions, namely submodular functions with unary and pairwise terms. Larger classes of functions can be minimized in polynomial time. A complete characterization of such classes has been established; they include k-submodular functions for an integer k ≥ 1.
I will investigate whether such tools from discrete optimization can lead to more efficient inference algorithms for practical problems. I have already found an important application of k-submodular functions for minimizing Potts energy functions that are frequently used in computer vision. The concept of submodularity has also recently appeared in the context of computing marginals in graphical models; here, too, discrete optimization tools could be used.
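To fix notation for the function classes mentioned above (standard definitions from the literature, not quoted from the proposal):

```latex
% A pairwise energy over discrete labels x_i:
\[
  E(x) \;=\; \sum_{i} \theta_i(x_i) \;+\; \sum_{(i,j)} \theta_{ij}(x_i, x_j).
\]
% In the binary case it is submodular, and hence exactly minimizable by a
% graph cut, when every pairwise term satisfies
\[
  \theta_{ij}(0,0) + \theta_{ij}(1,1) \;\le\; \theta_{ij}(0,1) + \theta_{ij}(1,0).
\]
% The Potts model referred to above is the multi-label special case
\[
  \theta_{ij}(x_i, x_j) \;=\; w_{ij}\,[x_i \neq x_j], \qquad w_{ij} \ge 0.
\]
```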
Practically-oriented: Modern techniques such as graph cuts and tree-reweighted message passing give excellent results for some graphical models, such as those with Potts energies. However, they fail for more complicated models. I aim to develop new tools for tackling such hard energies. This will include exploring tighter convex relaxations of the problem.
Applications, sequence tagging problems: Recently, we developed new algorithms for inference in pattern-based Conditional Random Fields (CRFs) on a chain. This model can naturally be applied to sequence tagging problems; it generalizes the popular CRF model by giving it more flexibility. I will investigate (i) applications to specific tasks, such as protein secondary structure prediction, and (ii) ways to extend the model.
Max ERC Funding
1 641 585 €
Duration
Start date: 2014-06-01, End date: 2019-05-31
Project acronym FEALORA
Project "Feasibility, logic and randomness in computational complexity"
Researcher (PI) Pavel Pudlák
Host Institution (HI) MATEMATICKY USTAV AV CR V.V.I.
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary "We will study fundamental problems in complexity theory using means developed in logic, specifically, in the filed of proof complexity. Since these problems seem extremely difficult and little progress has been achieved in solving them, we will prove results that will explain why they are so difficult and in which direction theory should be developed.
Our aim is to develop a system of conjectures based on the concepts of feasible incompleteness and pseudorandomness. Feasible incompleteness refers to conjectures about the unprovability of statements concerning low-complexity computations and about the lengths of proofs of finite consistency statements. Essentially, they say that incompleteness in the finite domain behaves in a similar way as in the infinite. Several conjectures of this kind have already been stated. They have strong consequences concerning the separation of complexity classes, but only a few special cases have been proved. We want to develop a unified system which will also include conjectures connecting feasible incompleteness with pseudorandomness. A major part of our work will concern proving special cases and relativized versions of these conjectures in order to provide evidence for their truth. We believe that the essence of the fundamental problems in complexity theory is logical, and thus developing theory in the way described above will eventually lead to their solution.
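To make the flavour of these conjectures concrete (an illustrative rendering in standard proof-complexity notation; the exact statements are the project's subject and are not reproduced here):

```latex
% The finite consistency statement of a theory T:
\[
  \mathrm{Con}_T(n) \;:\; \text{``there is no $T$-proof of a contradiction of length at most $n$''},
\]
% a statement of low logical complexity whose size is polynomial in n.
% Feasible incompleteness conjectures then assert, roughly, that for suitable
% consistent theories S the shortest T-proofs of such statements are
% superpolynomial in n:
\[
  \min\{\, |\pi| \;:\; \pi \text{ is a } T\text{-proof of } \mathrm{Con}_S(n) \,\} \;\neq\; n^{O(1)},
\]
% mirroring Gödel's second incompleteness theorem in the finite domain.
```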
Max ERC Funding
1 259 596 €
Duration
Start date: 2014-01-01, End date: 2018-12-31
Project acronym GRAPHALGAPP
Project Challenges in Graph Algorithms with Applications
Researcher (PI) Monika Hildegard Henzinger
Host Institution (HI) UNIVERSITAT WIEN
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary This project has two thrusts of equal importance. Firstly, it aims to develop new graph algorithmic techniques, specifically in the areas of dynamic graph algorithms, online algorithms and approximation algorithms for graph-based optimization problems. Thus, it proposes to solve long-standing, fundamental problems that are central to the field of algorithms. Secondly, it plans to apply these techniques to graph algorithmic problems in different fields of application, specifically in computer-aided verification, computational biology, and web-based advertisement with the goal of significantly advancing the state-of-the-art in these fields. This includes theoretical work as well as experimental evaluation on real-life data sets.
Thus, the goal of this project is a comprehensive approach to algorithms research which involves both excellent fundamental algorithms research and the solution of concrete applied problems.
Max ERC Funding
2 428 258 €
Duration
Start date: 2014-03-01, End date: 2019-08-31