Project acronym 3D Reloaded
Project 3D Reloaded: Novel Algorithms for 3D Shape Inference and Analysis
Researcher (PI) Daniel Cremers
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary Despite their amazing success, we believe that computer vision algorithms have only scratched the surface of what can be done in terms of modeling and understanding our world from images. We believe that novel image analysis techniques will be a major enabler and driving force behind next-generation technologies, enhancing everyday life and opening up radically new possibilities. And we believe that the key to achieving this is to develop algorithms for reconstructing and analyzing the 3D structure of our world.
In this project, we will focus on three lines of research:
A) We will develop algorithms for 3D reconstruction from standard color cameras and from RGB-D cameras. In particular, we will promote real-time-capable direct and dense methods (see the sketch after this list). In contrast to the classical two-stage approach of sparse feature-point-based motion estimation and subsequent dense reconstruction, these methods optimally exploit all color information to jointly estimate dense geometry and camera motion.
B) We will develop algorithms for 3D shape analysis, including rigid and non-rigid matching, decomposition and interpretation of 3D shapes. We will focus on algorithms which are optimal or near-optimal. One of the major computational challenges lies in generalizing existing 2D shape analysis techniques to shapes in 3D and 4D (temporal evolutions of 3D shape).
C) We will develop shape priors for 3D reconstruction. These can be learned from sample shapes or acquired during the reconstruction process. For example, when reconstructing a large office, algorithms may exploit the geometric self-similarity of the scene, storing a model of a chair and its multiple instances only once rather than multiple times.
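As a concrete, hedged illustration of the direct and dense approach in (A): a minimal sketch, assuming a pinhole camera with intrinsics K and nearest-neighbour image sampling (all names are illustrative, not the project's implementation). It computes the per-pixel photometric residual between a reference image and a second image warped through a depth map and a camera motion (R, t); a direct method minimizes the robustified sum of squares of exactly such residuals jointly over depth and motion, instead of first matching sparse feature points.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the project's code): the
# photometric residual that direct dense methods minimize jointly over
# per-pixel depth and camera motion (R, t).
def photometric_residual(I_ref, I_cur, depth, K, R, t):
    """Per-pixel residual I_ref(x) - I_cur(pi(R * pi^-1(x, depth(x)) + t))."""
    h, w = I_ref.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], -1).reshape(-1, 3).T
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)  # back-project to 3D
    proj = K @ (R @ pts + t.reshape(3, 1))                 # move and project
    u2 = np.clip((proj[0] / proj[2]).round().astype(int), 0, w - 1)
    v2 = np.clip((proj[1] / proj[2]).round().astype(int), 0, h - 1)
    return I_ref.reshape(-1) - I_cur[v2, u2]               # photometric error
```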
Advancing the state of the art in geometric reconstruction and geometric analysis will have a profound impact well beyond computer vision. We strongly believe that we have the necessary competence to pursue this project. Preliminary results have been well received by the community.
Max ERC Funding
2 000 000 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym 4DRepLy
Project Closing the 4D Real World Reconstruction Loop
Researcher (PI) Christian THEOBALT
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary 4D reconstruction, i.e. camera-based dense reconstruction of dynamic scenes, is a grand challenge in computer graphics and computer vision. Despite great progress, 4D capture of the complex, diverse real world outside a studio is still far from feasible. 4DRepLy builds a new generation of high-fidelity 4D reconstruction (4DRecon) methods. They will be the first to efficiently capture all types of deformable objects (humans and other types) in crowded real-world scenes with a single color or depth camera. They capture space-time-coherent deforming geometry, motion, high-frequency reflectance and illumination at unprecedented detail, and will be the first to handle difficult occlusions, topology changes and large groups of interacting objects. They automatically adapt to new scene types, yet deliver models with meaningful, interpretable parameters. This requires far-reaching contributions: First, we develop groundbreaking new plasticity-enhanced model-based 4D reconstruction methods that automatically adapt to new scenes. Second, we develop radically new machine-learning-based dense 4D reconstruction methods. Third, these model- and learning-based methods are combined in two revolutionary new classes of 4DRecon methods: 1) advanced fusion-based methods and 2) methods with deep architectural integration. Both 1) and 2) are automatically designed in the 4D Real World Reconstruction Loop, a revolutionary new design paradigm in which 4DRecon methods refine and adapt themselves while continuously processing unlabeled real-world input. This overcomes the previously unbreakable scalability barrier to real-world scene diversity, complexity and generality. This paradigm shift opens up a new research direction in graphics and vision and has far-reaching relevance across many scientific fields. It enables new applications of profound social pervasion and significant economic impact, e.g., for visual media and virtual/augmented reality, and for future autonomous and robotic systems.
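To make the idea of a self-refining reconstruction loop tangible, here is a deliberately tiny, hedged sketch under strong simplifying assumptions (the render function and its two parameters are stand-ins, not the project's models): a parametric model adapts itself on a stream of unlabeled observations by minimizing a re-rendering error, with no ground-truth labels involved.

```python
import numpy as np

# Toy sketch (assumptions only, not the project's method): a model refines
# its parameters on unlabeled input by minimizing the error of re-rendering
# its own estimate -- the self-supervised core of a "reconstruction loop".
def render(theta, x):
    return theta[0] * np.sin(x) + theta[1]       # stand-in for a renderer

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 50)
theta = np.zeros(2)
for _ in range(200):                             # stream of unlabeled frames
    frame = 2.0 * np.sin(x) + 0.5 + 0.01 * rng.normal(size=x.size)
    residual = render(theta, x) - frame          # self-supervised error
    grad = np.array([(residual * np.sin(x)).mean(), residual.mean()])
    theta -= 0.5 * grad                          # refine inside the loop
print(theta)                                     # approaches [2.0, 0.5]
```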
Max ERC Funding
1 977 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym 5D Heart Patch
Project A Functional, Mature In vivo Human Ventricular Muscle Patch for Cardiomyopathy
Researcher (PI) Kenneth Randall Chien
Host Institution (HI) KAROLINSKA INSTITUTET
Call Details Advanced Grant (AdG), LS7, ERC-2016-ADG
Summary Developing new therapeutic strategies for heart regeneration is a major goal for cardiac biology and medicine. While cardiomyocytes can be generated from human pluripotent stem cells (hPSCs) in vitro, it has proven difficult to use these cells to generate a large-scale, mature human ventricular muscle graft on the injured heart in vivo. The central objective of this proposal is to optimize the generation of a large-scale, pure, fully functional human ventricular muscle patch in vivo through the self-assembly of purified human ventricular progenitors and the localized expression of defined paracrine factors that drive their expansion, differentiation, vascularization, matrix formation, and maturation. Recently, we have found that purified hPSC-derived ventricular progenitors (HVPs) can self-assemble in vivo on the epicardial surface into a 3D vascularized and functional ventricular patch with its own extracellular matrix via a cell-autonomous pathway. A two-step protocol and FACS purification of HVP receptors can generate billions of pure HVPs. The current proposal will lead to the identification of defined paracrine pathways to enhance the survival, grafting/implantation, expansion, differentiation, matrix formation, vascularization and maturation of the graft in vivo. We will capitalize on our unique HVP system and our novel modRNA technology to deliver therapeutic strategies by using the in vivo human ventricular muscle to model arrhythmogenic cardiomyopathy, and to optimize the ability of the graft to compensate for the massive loss of functional muscle during ischemic cardiomyopathy and post-myocardial infarction. The studies will lead to new in vivo chimeric models of human cardiac disease and an experimental paradigm to optimize organ-on-organ cardiac tissue engineering of an in vivo, functional, mature ventricular patch for cardiomyopathy.
Max ERC Funding
2 149 228 €
Duration
Start date: 2017-12-01, End date: 2022-11-30
Project acronym ACCORD
Project Algorithms for Complex Collective Decisions on Structured Domains
Researcher (PI) Edith Elkind
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Algorithms for Complex Collective Decisions on Structured Domains.
The aim of this proposal is to substantially advance the field of Computational Social Choice, by developing new tools and methodologies that can be used for making complex group decisions in rich and structured environments. We consider settings where each member of a decision-making body has preferences over a finite set of alternatives, and the goal is to synthesise a collective preference over these alternatives, which may take the form of a partial order over the set of alternatives with a predefined structure: examples include selecting a fixed-size set of alternatives, a ranking of the alternatives, a winner and up to two runners-up, etc. We will formulate desiderata that apply to such preference aggregation procedures, design specific procedures that satisfy as many of these desiderata as possible, and develop efficient algorithms for computing them. As the latter step may be infeasible on general preference domains, we will focus on identifying the least restrictive domains that enable efficient computation, and use real-life preference data to verify whether the associated restrictions are likely to be satisfied in realistic preference aggregation scenarios. Also, we will determine whether our preference aggregation procedures are computationally resistant to malicious behavior. To lower the cognitive burden on the decision-makers, we will extend our procedures to accept partial rankings as inputs. Finally, to further contribute towards bridging the gap between theory and practice of collective decision making, we will provide open-source software implementations of our procedures, and reach out to the potential users to obtain feedback on their practical applicability.
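As a hedged, minimal illustration of one of the output types above (a fixed-size set of alternatives): the sketch below selects a committee by Borda scores over complete rankings. It is a toy baseline for exposition, not one of the procedures the project will design, and it ignores ties and strategic behaviour.

```python
# Toy sketch (not a procedure from the proposal): Borda-score selection of a
# fixed-size committee from complete rankings, best alternative first.
def borda_committee(ballots, k):
    m = len(ballots[0])
    score = {alt: 0 for alt in ballots[0]}
    for ranking in ballots:
        for pos, alt in enumerate(ranking):
            score[alt] += m - 1 - pos            # Borda points for this ballot
    return sorted(score, key=score.get, reverse=True)[:k]  # ties broken arbitrarily

ballots = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["a", "c", "b", "d"]]
print(borda_committee(ballots, 2))               # ['a', 'b'] on this profile
```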
Max ERC Funding
1 395 933 €
Duration
Start date: 2015-07-01, End date: 2020-06-30
Project acronym ACDC
Project Algorithms and Complexity of Highly Decentralized Computations
Researcher (PI) Fabian Daniel Kuhn
Host Institution (HI) ALBERT-LUDWIGS-UNIVERSITAET FREIBURG
Call Details Starting Grant (StG), PE6, ERC-2013-StG
Summary Many of today's and tomorrow's computer systems are built on top of large-scale networks such as the Internet, the World Wide Web, wireless ad hoc and sensor networks, or peer-to-peer networks. Driven by technological advances, new kinds of networks and applications have become possible and we can safely assume that this trend is going to continue. Modern systems are often envisioned to consist of a potentially large number of individual components that are organized in a completely decentralized way. There is no central authority that controls the topology of the network, how nodes join or leave the system, or in which way nodes communicate with each other. Also, many future distributed applications will be built using wireless devices that communicate via radio.
The general objective of the proposed project is to improve our understanding of the algorithmic and theoretical foundations of decentralized distributed systems. From an algorithmic point of view, decentralized networks and computations pose a number of fascinating and unique challenges that are not present in sequential or more standard distributed systems. As communication is limited and mostly between nearby nodes, each node of a large network can only maintain a very restricted view of the global state of the system. This is particularly true if the network changes dynamically, either because nodes join or leave the system or because the topology changes over time, e.g., due to the mobility of the devices in the case of a wireless network. Nevertheless, the nodes of a network need to coordinate in order to achieve some global goal.
In particular, we plan to study algorithms and lower bounds for basic computation and information dissemination tasks in such systems. In addition, we are particularly interested in the complexity of distributed computations in dynamic and wireless networks.
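A hedged toy example of the round-based, local-view model described above (the graph and helper names are illustrative): nodes repeatedly exchange messages with their neighbours only, yet the network reaches a global goal, here agreement on the maximum identifier.

```python
# Toy sketch (illustrative, not from the proposal): synchronous flooding in
# which each node sees only its neighbours, yet all nodes learn the global
# maximum id after a number of rounds bounded by the network diameter.
def flood_max(adj):
    """adj: {node: set of neighbours}. Returns rounds until agreement."""
    known = {v: v for v in adj}                  # each node knows only itself
    rounds = 0
    while len(set(known.values())) > 1:
        known = {v: max(known[u] for u in adj[v] | {v}) for v in adj}
        rounds += 1
    return rounds

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}     # a path on four nodes
print(flood_max(adj))                            # 3 rounds on this path
```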
Max ERC Funding
1 148 000 €
Duration
Start date: 2013-11-01, End date: 2018-10-31
Project acronym ACROSS
Project 3D Reconstruction and Modeling across Different Levels of Abstraction
Researcher (PI) Leif Kobbelt
Host Institution (HI) RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary Digital 3D models are gaining more and more importance in diverse application fields ranging from computer graphics, multimedia and simulation sciences to engineering, architecture, and medicine. Powerful technologies to digitize the 3D shape of real objects and scenes are becoming available even to consumers. However, the raw geometric data emerging from, e.g., 3D scanning or multi-view stereo often lacks the consistent structure and meta-information necessary for the effective deployment of such models in sophisticated downstream applications like animation, simulation, or CAD/CAM that go beyond mere visualization. Our goal is to develop new fundamental algorithms which transform raw geometric input data into augmented 3D models that are equipped with structural meta-information such as feature-aligned meshes, patch segmentations, local and global geometric constraints, statistical shape variation data, or even procedural descriptions. Our methodological approach is inspired by the human perceptual system, which integrates bottom-up (data-driven) and top-down (model-driven) mechanisms in its hierarchical processing. Similarly, we combine algorithms operating on different levels of abstraction into reconstruction and modeling networks. Instead of developing an individual solution for each specific application scenario, we create an ecosystem of algorithms for automatic processing and interactive design of highly complex 3D models. A key concept is the information flow across all levels of abstraction in a bottom-up as well as top-down fashion. We not only aim at optimizing geometric representations but in fact at bridging the gap between reconstruction and recognition of geometric objects. The results from this project will make it possible to bring 3D models of real-world objects into many highly relevant applications in science, industry, and entertainment, greatly reducing the excessive manual effort that is still necessary today.
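As one hedged toy example of the bottom-up direction (parameters and thresholds are illustrative, not the project's algorithms): extracting a first piece of structural meta-information, a planar patch, from a raw point cloud by RANSAC.

```python
import numpy as np

# Toy sketch (illustrative only): RANSAC detection of a planar patch in a
# noisy point cloud -- one bottom-up step from raw geometry to structure.
def ransac_plane(points, iters=200, tol=0.02, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                               # degenerate sample
        n /= np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < tol  # point-to-plane distance
        if inliers.sum() > best.sum():
            best = inliers
    return best                                    # mask of the planar patch

rng = np.random.default_rng(1)
plane = np.c_[rng.random((300, 2)), 0.01 * rng.normal(size=300)]
pts = np.r_[plane, rng.random((100, 3))]           # a plane plus clutter
print(ransac_plane(pts).sum(), "points on the detected plane")
```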
Max ERC Funding
2 482 000 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym ALCOHOLLIFECOURSE
Project Alcohol Consumption across the Life-course: Determinants and Consequences
Researcher (PI) Anne Rebecca Britton
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), LS7, ERC-2012-StG_20111109
Summary The epidemiology of alcohol use and related health consequences plays a vital role in monitoring populations' alcohol consumption patterns and the problems associated with drinking. Such studies seek to explain the mechanisms linking consumption to harm and, ultimately, to reduce the health burden. Research needs to consider changes in drinking behaviour over the life-course. The current evidence base gives insufficient consideration to the complexity of lifetime consumption patterns, the predictors of change, and the subsequent health risks.
Aims of the study
1. To describe age-related trajectories of drinking in different settings and to determine the extent to which individual and social contextual factors, including socioeconomic position, social networks and life events, influence drinking pattern trajectories.
2. To estimate the impact of drinking trajectories on physical functioning and disease and to disentangle the exposure-outcome associations in terms of a) timing, i.e. health effect of drinking patterns in early, mid and late life; and b) duration, i.e. whether the impact of drinking accumulates over time.
3. To test the bidirectional associations between health and changes in consumption over the life-course in order to estimate the relative importance of these effects and to determine the dominant temporal direction.
4. To explore mechanisms and pathways through which drinking trajectories affect health and functioning in later life and to examine the role played by potential effect modifiers of the association between drinking and poor health.
Several large, longitudinal cohort studies from European countries with repeated measures of alcohol consumption will be combined and analysed to address these aims. A new team will be formed, consisting of the PI, a Research Associate and two PhD students. Dissemination will be through journals and conferences, culminating in a one-day workshop for academics, practitioners and policy makers in the alcohol field.
Max ERC Funding
1 032 815 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym ALEXANDRIA
Project Foundations for Temporal Retrieval, Exploration and Analytics in Web Archives
Researcher (PI) Wolfgang Nejdl
Host Institution (HI) GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary Significant parts of our cultural heritage are produced on the Web, yet only insufficient opportunities exist for accessing and exploring the past of the Web. The ALEXANDRIA project aims to develop the models, tools and techniques necessary to archive and index relevant parts of the Web, and to retrieve and explore this information in a meaningful way. While easy access to the current Web is a good baseline, optimal access to Web archives requires new models and algorithms for retrieval, exploration, and analytics which go far beyond what is needed to access the current state of the Web. This includes taking into account the unique temporal dimension of Web archives, structured semantic information already available on the Web, as well as social media and network information.
Within ALEXANDRIA, we will significantly advance semantic and time-based indexing for Web archives using human-compiled knowledge available on the Web, to efficiently index, retrieve and explore information about entities and events from the past. In doing so, we will focus on the concurrent evolution of this knowledge and the Web content to be indexed, and take into account the diversity and incompleteness of this knowledge. We will further investigate mixed crowd- and machine-based Web analytics to support long-running and collaborative retrieval and analysis processes on Web archives. Usage of implicit human feedback will be essential to provide better indexing through insights gained during the analysis process and to better focus the harvesting of content.
The ALEXANDRIA Testbed will provide an important context for research, exploration and evaluation of the concepts, methods and algorithms developed in this project, and will provide both relevant collections and algorithms that enable further research on and practical application of our research results to existing archives like the Internet Archive, the Internet Memory Foundation and Web archives maintained by European national libraries.
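As a minimal, hedged sketch of what time-based indexing could look like (the data layout is an assumption for exposition, not ALEXANDRIA's design): each posting records the interval during which an archived page contained a term, so that queries can be evaluated at a chosen point in archive time.

```python
from collections import defaultdict

# Toy sketch (assumed layout, not the project's index): postings that carry
# the interval [first_seen, last_seen] in which a page contained the term.
index = defaultdict(list)            # term -> [(doc_id, first_seen, last_seen)]

def add_version(term, doc_id, first_seen, last_seen):
    index[term].append((doc_id, first_seen, last_seen))

def search(term, year):
    """Documents that contained `term` in the archive as of `year`."""
    return [d for d, lo, hi in index[term] if lo <= year <= hi]

add_version("olympics", "page-17", 2004, 2008)
add_version("olympics", "page-42", 2010, 2014)
print(search("olympics", 2006))      # ['page-17']
```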
Max ERC Funding
2 493 600 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym ALEXANDRIA
Project Large-Scale Formal Proof for the Working Mathematician
Researcher (PI) Lawrence PAULSON
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Mathematical proofs have always been prone to error. Today, proofs can be hundreds of pages long and combine results from many specialisms, making them almost impossible to check. One solution is to deploy modern verification technology. Interactive theorem provers have demonstrated their potential as vehicles for formalising mathematics through achievements such as the verification of the Kepler Conjecture. Proofs done using such tools reach a high standard of correctness.
However, existing theorem provers are unsuitable for mathematics. Their formal proofs are unreadable. They struggle to do simple tasks, such as evaluating limits. They lack much basic mathematics, and the material they do have is difficult to locate and apply.
ALEXANDRIA will create a proof development environment attractive to working mathematicians, utilising the best technology available across computer science. Its focus will be the management and use of large-scale mathematical knowledge, both theorems and algorithms. The project will employ mathematicians to investigate the formalisation of mathematics in practice. Our already substantial formalised libraries will serve as the starting point. They will be extended and annotated to support sophisticated searches. Techniques will be borrowed from machine learning, information retrieval and natural language processing. Algorithms will be treated similarly: ALEXANDRIA will help users find and invoke the proof methods and algorithms appropriate for the task.
ALEXANDRIA will provide (1) comprehensive formal mathematical libraries; (2) search within libraries, and the mining of libraries for proof patterns; (3) automated support for the construction of large formal proofs; (4) sound and practical computer algebra tools.
ALEXANDRIA will be based on legible structured proofs. Formal proofs should be not mere code, but a machine-checkable form of communication between mathematicians.
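To illustrate the style of legible, machine-checkable proof the project advocates, here is a small structured proof in Lean 4, chosen only as a convenient vehicle (the proposal does not prescribe a particular prover): the calc block reads like ordinary mathematical reasoning while every step is checked.

```lean
-- A hedged illustration: a structured calc proof that a reader can follow
-- line by line, yet the machine verifies completely.
example (a b : Nat) (h : a = b) : a + a = b + b := by
  calc a + a = b + a := by rw [h]
    _ = b + b := by rw [h]
```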
Max ERC Funding
2 430 140 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym ALGAME
Project Algorithms, Games, Mechanisms, and the Price of Anarchy
Researcher (PI) Elias Koutsoupias
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary The objective of this proposal is to bring together a local team of young researchers who will work closely with international collaborators to advance the state of the art of Algorithmic Game Theory and open new avenues of research at the interface of Computer Science, Game Theory, and Economics. The proposal consists mainly of three intertwined research strands: algorithmic mechanism design, price of anarchy, and online algorithms.
Specifically, we will attempt to resolve some outstanding open problems in algorithmic mechanism design: characterizing the incentive compatible mechanisms for important domains, such as the domain of combinatorial auctions, and resolving the approximation ratio of mechanisms for scheduling unrelated machines. More generally, we will study centralized and distributed algorithms whose inputs are controlled by selfish agents that are interested in the outcome of the computation. We will investigate new notions of mechanisms with strong truthfulness and limited susceptibility to externalities that can facilitate modular design of mechanisms for complex domains.
We will expand the current research on the price of anarchy to time-dependent games where the players can select not only how to act but also when to act. We also plan to resolve outstanding questions on the price of stability and to build a robust approach to these questions, similar to smoothed analysis. For repeated games, we will investigate the convergence of simple strategies (e.g., fictitious play), online fairness, and strategic considerations (e.g., metagames). More generally, our aim is to find a productive formulation of playing unknown games by drawing on the fields of online algorithms and machine learning.
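As a hedged toy illustration of the repeated-game questions above, the sketch below runs fictitious play, where each player best-responds to the empirical frequency of the opponent's past actions. The payoff matrices form an arbitrary coordination game chosen only to show the dynamics; whether and when such simple strategies converge is precisely the kind of question the project studies.

```python
import numpy as np

# Toy sketch (illustrative game, not from the proposal): fictitious play in
# a 2x2 coordination game; both players lock onto the same equilibrium.
A = np.array([[2, 0], [0, 1]])       # row player's payoffs
B = A.T                              # column player's payoffs

counts_row = np.ones(2)              # row's past actions, as seen by column
counts_col = np.ones(2)              # column's past actions, as seen by row
for _ in range(1000):
    r = int(np.argmax(A @ (counts_col / counts_col.sum())))   # best response
    c = int(np.argmax((counts_row / counts_row.sum()) @ B))
    counts_row[r] += 1
    counts_col[c] += 1

print(counts_row / counts_row.sum(), counts_col / counts_col.sum())
```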
Max ERC Funding
2 461 000 €
Duration
Start date: 2013-04-01, End date: 2019-03-31