Project acronym 3D Reloaded
Project 3D Reloaded: Novel Algorithms for 3D Shape Inference and Analysis
Researcher (PI) Daniel Cremers
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary Despite their amazing success, we believe that computer vision algorithms have only scratched the surface of what can be done in terms of modeling and understanding our world from images. We believe that novel image analysis techniques will be a major enabler and driving force behind next-generation technologies, enhancing everyday life and opening up radically new possibilities. And we believe that the key to achieving this is to develop algorithms for reconstructing and analyzing the 3D structure of our world.
In this project, we will focus on three lines of research:
A) We will develop algorithms for 3D reconstruction from standard color cameras and from RGB-D cameras. In particular, we will promote real-time-capable direct and dense methods. In contrast to the classical two-stage approach of sparse feature-point-based motion estimation and subsequent dense reconstruction, these methods optimally exploit all color information to jointly estimate dense geometry and camera motion (a toy illustration of this principle follows item C below).
B) We will develop algorithms for 3D shape analysis, including rigid and non-rigid matching, decomposition and interpretation of 3D shapes. We will focus on algorithms which are optimal or near-optimal. One of the major computational challenges lies in generalizing existing 2D shape analysis techniques to shapes in 3D and 4D (temporal evolutions of 3D shape).
C) We will develop shape priors for 3D reconstruction. These can be learned from sample shapes or acquired during the reconstruction process. For example, when reconstructing a larger office, algorithms may exploit the geometric self-similarity of the scene, storing a model of a chair and its multiple instances only once rather than multiple times.
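To make the contrast in A) concrete, here is a minimal sketch of the direct, dense principle: a candidate camera motion is scored by a photometric error summed over all pixels, rather than by matching a handful of feature points. This is an illustration only, not the project's implementation; the function names are hypothetical and the warp is reduced to a toy image-plane shift instead of a full 6-DoF pose applied through the depth map.

```python
import numpy as np

def photometric_cost(ref_img, ref_depth, tgt_img, shift):
    """Mean squared intensity difference for a candidate (toy) motion.

    Direct dense methods score a motion hypothesis by comparing every pixel
    of the reference image against the target image after warping, instead
    of matching a sparse set of feature points. Here the 'warp' is just an
    integer image-plane shift; a real system would project each pixel
    through the depth map and a full 6-DoF camera pose.
    """
    h, w = ref_img.shape
    dx, dy = shift
    cost, n = 0.0, 0
    for y in range(h):
        for x in range(w):
            u, v = x + dx, y + dy
            if 0 <= u < w and 0 <= v < h and ref_depth[y, x] > 0:
                r = float(ref_img[y, x]) - float(tgt_img[v, u])
                cost += r * r
                n += 1
    return cost / max(n, 1)

# Toy usage: pick the shift that minimizes the dense photometric error.
ref = np.random.rand(32, 32)
tgt = np.roll(ref, shift=2, axis=1)   # target view = reference moved 2 px right
depth = np.ones_like(ref)
best = min((photometric_cost(ref, depth, tgt, (dx, 0)), dx) for dx in range(-4, 5))
print("estimated horizontal motion:", best[1], "pixels")
```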
Advancing the state of the art in geometric reconstruction and geometric analysis will have a profound impact well beyond computer vision. We strongly believe that we have the necessary competence to pursue this project. Preliminary results have been well received by the community.
Max ERC Funding
2 000 000 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym 3DCellPhase-
Project In situ Structural Analysis of Molecular Crowding and Phase Separation
Researcher (PI) Julia MAHAMID
Host Institution (HI) EUROPEAN MOLECULAR BIOLOGY LABORATORY
Call Details Starting Grant (StG), LS1, ERC-2017-STG
Summary This proposal brings together two fields in biology, namely the emerging field of phase-separated assemblies in cell biology and state-of-the-art cellular cryo-electron tomography, to advance our understanding of a fundamental, yet elusive, question: the molecular organization of the cytoplasm.
Eukaryotes organize their biochemical reactions into functionally distinct compartments. Intriguingly, many, if not most, cellular compartments are not membrane enclosed. Rather, they assemble dynamically by phase separation, typically triggered upon a specific event. Despite significant progress on reconstituting such liquid-like assemblies in vitro, we lack information as to whether these compartments in vivo are indeed amorphous liquids, or whether they exhibit structural features such as gels or fibers. My recent work on sample preparation of cells for cryo-electron tomography, including cryo-focused ion beam thinning, guided by 3D correlative fluorescence microscopy, shows that we can now prepare site-specific ‘electron-transparent windows’ in suitable eukaryotic systems, which allow direct examination of structural features of cellular compartments in their cellular context. Here, we will use these techniques to elucidate the structural principles and cytoplasmic environment driving the dynamic assembly of two phase-separated compartments: Stress granules, which are RNA bodies that form rapidly in the cytoplasm upon cellular stress, and centrosomes, which are sites of microtubule nucleation. We will combine these studies with a quantitative description of the crowded nature of cytoplasm and of its local variations, to provide a direct readout of the impact of excluded volume on molecular assembly in living cells. Taken together, these studies will provide fundamental insights into the structural basis by which cells form biochemical compartments.
Max ERC Funding
1 228 125 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym 4DRepLy
Project Closing the 4D Real World Reconstruction Loop
Researcher (PI) Christian THEOBALT
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary 4D reconstruction, i.e., camera-based dense reconstruction of dynamic scenes, is a grand challenge in computer graphics and computer vision. Despite great progress, 4D capture of the complex, diverse real world outside a studio is still far from feasible. 4DRepLy builds a new generation of high-fidelity 4D reconstruction (4DRecon) methods. They will be the first to efficiently capture all types of deformable objects (humans and other types) in crowded real-world scenes with a single color or depth camera. They will capture space-time-coherent deforming geometry, motion, high-frequency reflectance and illumination at unprecedented detail, and will be the first to handle difficult occlusions, topology changes and large groups of interacting objects. They will automatically adapt to new scene types, yet deliver models with meaningful, interpretable parameters. This requires far-reaching contributions: First, we develop groundbreaking new plasticity-enhanced model-based 4D reconstruction methods that automatically adapt to new scenes. Second, we develop radically new machine-learning-based dense 4D reconstruction methods. Third, these model- and learning-based methods are combined in two revolutionary new classes of 4DRecon methods: 1) advanced fusion-based methods and 2) methods with deep architectural integration. Both 1) and 2) are automatically designed in the 4D Real World Reconstruction Loop, a revolutionary new design paradigm in which 4DRecon methods refine and adapt themselves while continuously processing unlabeled real-world input. This overcomes the previously unbreakable scalability barrier to real-world scene diversity, complexity and generality. This paradigm shift opens up a new research direction in graphics and vision and has far-reaching relevance across many scientific fields. It enables new applications of profound social pervasion and significant economic impact, e.g., for visual media and virtual/augmented reality, and for future autonomous and robotic systems.
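One concrete way to read the Reconstruction Loop idea (methods that refine themselves while continuously processing unlabeled input) is as a self-supervised adaptation loop. The sketch below is a generic illustration under that reading, not the project's actual architecture; every callable and parameter name is a hypothetical placeholder.

```python
from typing import Any, Callable, Iterable

def reconstruction_loop(frames: Iterable[Any],
                        reconstruct: Callable[[Any, dict], Any],
                        self_consistency: Callable[[Any, Any], float],
                        adapt: Callable[[dict, float], dict],
                        model: dict) -> dict:
    """Process an unlabeled stream: reconstruct each frame with the current
    model, score the result with a self-supervised consistency measure
    (e.g. how well the reconstruction re-explains the input), and use that
    score to adapt the model before the next frame arrives."""
    for frame in frames:
        recon = reconstruct(frame, model)
        loss = self_consistency(frame, recon)
        model = adapt(model, loss)
    return model

# Dummy stand-ins, only to show that the control flow runs end to end.
final = reconstruction_loop(
    frames=[1.0, 2.0, 3.0],
    reconstruct=lambda f, m: f * m["scale"],
    self_consistency=lambda f, r: (r - f) ** 2,
    adapt=lambda m, loss: {"scale": m["scale"] - 0.1 * loss * (m["scale"] - 1.0)},
    model={"scale": 1.5},
)
print(final)
```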
Max ERC Funding
1 977 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym ABCTRANSPORT
Project Minimalist multipurpose ATP-binding cassette transporters
Researcher (PI) Dirk Jan Slotboom
Host Institution (HI) RIJKSUNIVERSITEIT GRONINGEN
Call Details Starting Grant (StG), LS1, ERC-2011-StG_20101109
Summary Many Gram-positive (pathogenic) bacteria are dependent on the uptake of vitamins from the environment or from the infected host. We have recently discovered the long-elusive family of membrane protein complexes catalyzing such transport. The vitamin transporters have an unprecedented modular architecture consisting of a single multipurpose energizing module (the Energy Coupling Factor, ECF) and multiple exchangeable membrane proteins responsible for substrate recognition (S-components). The S-components have characteristics of ion-gradient driven transporters (secondary active transporters), whereas the energizing modules are related to ATP-binding cassette (ABC) transporters (primary active transporters).
The aim of the proposal is threefold: First, we will address the question of how properties of primary and secondary transporters are combined in ECF transporters to obtain a novel transport mechanism. Second, we will study the fundamental and unresolved question of how protein-protein recognition takes place in the hydrophobic environment of the lipid bilayer. The modular nature of the ECF proteins offers a natural system to study the driving forces used for membrane protein interaction. Third, we will assess whether the ECF transport systems could become targets for antibacterial drugs. ECF transporters are found exclusively in prokaryotes, and their activity is often essential for viability of Gram-positive pathogens. Thus they could turn out to be an Achilles’ heel for the organisms.
Structural and mechanistic studies (X-ray crystallography, microscopy, spectroscopy and biochemistry) will reveal how the different transport modes are combined in a single protein complex, how transport is energized and catalyzed, and how protein-protein recognition takes place. Microbiological screens will be developed to search for compounds that inhibit prokaryote-specific steps of the mechanism of ECF transporters.
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-01-01, End date: 2017-12-31
Project acronym ABCvolume
Project The ABC of Cell Volume Regulation
Researcher (PI) Berend Poolman
Host Institution (HI) RIJKSUNIVERSITEIT GRONINGEN
Call Details Advanced Grant (AdG), LS1, ERC-2014-ADG
Summary Cell volume regulation is crucial for any living cell because changes in volume determine the metabolic activity through, e.g., changes in ionic strength, pH, macromolecular crowding and membrane tension. These physicochemical parameters influence interaction rates and affinities of biomolecules, folding rates, and fold stabilities in vivo. Understanding of the underlying volume-regulatory mechanisms has immediate application in biotechnology and health, yet these factors are generally ignored in systems analyses of cellular functions.
My team has uncovered a number of mechanisms of cell volume regulation and gained new insights into the process. The next step forward is to elucidate how the components of a cell volume regulatory circuit work together and control the physicochemical conditions of the cell.
I propose construction of a synthetic cell in which an osmoregulatory transporter and mechanosensitive channel form a minimal volume regulatory network. My group has developed the technology to reconstitute membrane proteins into lipid vesicles (synthetic cells). One of the challenges is to incorporate into the vesicles an efficient pathway for ATP production and maintain energy homeostasis while the load on the system varies. We aim to control the transmembrane flux of osmolytes, which requires elucidation of the molecular mechanism of gating of the osmoregulatory transporter. We will focus on the glycine betaine ABC importer, which is one of the most complex transporters known to date with ten distinct protein domains, transiently interacting with each other.
The proposed synthetic metabolic circuit constitutes a fascinating out-of-equilibrium system, allowing us to understand cell volume regulatory mechanisms in a context and at a level of complexity minimally needed for life. Analysis of this circuit will address many outstanding questions and eventually allow us to design more sophisticated vesicular systems with applications, for example as compartmentalized reaction networks.
Max ERC Funding
2 247 231 €
Duration
Start date: 2015-07-01, End date: 2020-06-30
Project acronym ACDC
Project Algorithms and Complexity of Highly Decentralized Computations
Researcher (PI) Fabian Daniel Kuhn
Host Institution (HI) ALBERT-LUDWIGS-UNIVERSITAET FREIBURG
Call Details Starting Grant (StG), PE6, ERC-2013-StG
Summary "Many of today's and tomorrow's computer systems are built on top of large-scale networks such as, e.g., the Internet, the world wide web, wireless ad hoc and sensor networks, or peer-to-peer networks. Driven by technological advances, new kinds of networks and applications have become possible and we can safely assume that this trend is going to continue. Often modern systems are envisioned to consist of a potentially large number of individual components that are organized in a completely decentralized way. There is no central authority that controls the topology of the network, how nodes join or leave the system, or in which way nodes communicate with each other. Also, many future distributed applications will be built using wireless devices that communicate via radio.
The general objective of the proposed project is to improve our understanding of the algorithmic and theoretical foundations of decentralized distributed systems. From an algorithmic point of view, decentralized networks and computations pose a number of fascinating and unique challenges that are not present in sequential or more standard distributed systems. As communication is limited and mostly between nearby nodes, each node of a large network can only maintain a very restricted view of the global state of the system. This is particularly true if the network can change dynamically, either by nodes joining or leaving the system or if the topology changes over time, e.g., because of the mobility of the devices in case of a wireless network. Nevertheless, the nodes of a network need to coordinate in order to achieve some global goal.
In particular, we plan to study algorithms and lower bounds for basic computation and information dissemination tasks in such systems. In addition, we are particularly interested in the complexity of distributed computations in dynamic and wireless networks."
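As background for the "basic information dissemination tasks" mentioned above, the sketch below simulates flooding, the textbook broadcast primitive in synchronous message-passing networks: in every round each informed node forwards the message to all its neighbours, so the whole network is informed after a number of rounds equal to its diameter. The graph and source are arbitrary toy choices, not taken from the proposal.

```python
def flood(adjacency, source):
    """Round-based flooding: returns the set of informed nodes and the
    number of synchronous rounds until no new node learns the message."""
    informed = {source}
    rounds = 0
    while True:
        new = {v for u in informed for v in adjacency[u]} - informed
        if not new:          # nothing new: either everyone knows the message,
            break            # or the remaining nodes are unreachable
        informed |= new
        rounds += 1
    return informed, rounds

# Toy network: a path 0-1-2-3 with an extra chord between 1 and 3.
adjacency = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
print(flood(adjacency, 0))   # ({0, 1, 2, 3}, 2): everyone informed in 2 rounds
```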
Max ERC Funding
1 148 000 €
Duration
Start date: 2013-11-01, End date: 2018-10-31
Project acronym ACROSS
Project 3D Reconstruction and Modeling across Different Levels of Abstraction
Researcher (PI) Leif Kobbelt
Host Institution (HI) RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary "Digital 3D models are gaining more and more importance in diverse application fields ranging from computer graphics, multimedia and simulation sciences to engineering, architecture, and medicine. Powerful technologies to digitize the 3D shape of real objects and scenes are becoming available even to consumers. However, the raw geometric data emerging from, e.g., 3D scanning or multi-view stereo often lacks a consistent structure and meta-information which are necessary for the effective deployment of such models in sophisticated down-stream applications like animation, simulation, or CAD/CAM that go beyond mere visualization. Our goal is to develop new fundamental algorithms which transform raw geometric input data into augmented 3D models that are equipped with structural meta information such as feature aligned meshes, patch segmentations, local and global geometric constraints, statistical shape variation data, or even procedural descriptions. Our methodological approach is inspired by the human perceptual system that integrates bottom-up (data-driven) and top-down (model-driven) mechanisms in its hierarchical processing. Similarly we combine algorithms operating on different levels of abstraction into reconstruction and modeling networks. Instead of developing an individual solution for each specific application scenario, we create an eco-system of algorithms for automatic processing and interactive design of highly complex 3D models. A key concept is the information flow across all levels of abstraction in a bottom-up as well as top-down fashion. We not only aim at optimizing geometric representations but in fact at bridging the gap between reconstruction and recognition of geometric objects. The results from this project will make it possible to bring 3D models of real world objects into many highly relevant applications in science, industry, and entertainment, greatly reducing the excessive manual effort that is still necessary today."
Max ERC Funding
2 482 000 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym ACUITY
Project Algorithms for coping with uncertainty and intractability
Researcher (PI) Nikhil Bansal
Host Institution (HI) TECHNISCHE UNIVERSITEIT EINDHOVEN
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary The two biggest challenges in solving practical optimization problems are computational intractability and the presence of uncertainty: most problems are either NP-hard, or have incomplete input data which makes an exact computation impossible.
Recently, there has been huge progress in our understanding of intractability, based on spectacular algorithmic and lower bound techniques. For several problems, especially those with only local constraints, we can design optimum approximation algorithms that are provably the best possible.
However, typical optimization problems usually involve complex global constraints and are much less understood. The situation is even worse for coping with uncertainty. Most of the algorithms are based on ad-hoc techniques and there is no deeper understanding of what makes various problems easy or hard.
This proposal describes several new directions, together with concrete intermediate goals, that will break important new ground in the theory of approximation and online algorithms. The particular directions we consider are (i) extend the primal-dual method to systematically design online algorithms, (ii) build a structural theory of online problems based on work functions, (iii) develop new tools to use the power of strong convex relaxations, and (iv) design new algorithmic approaches based on non-constructive proof techniques.
The proposed research is at the cutting edge of algorithm design, and builds upon the recent success of the PI in resolving several longstanding questions in these areas. Any progress is likely to be a significant contribution to theoretical computer science and combinatorial optimization.
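The ski-rental problem is the standard warm-up illustration for the kind of online decision-making under uncertainty this proposal targets (it is background only, not one of the proposal's research questions): the input, namely how many days you will end up skiing, is revealed one day at a time, yet each day's rent-or-buy decision is irrevocable. The classical break-even rule is 2-competitive, i.e. it never pays more than twice what an algorithm with full knowledge of the future would pay.

```python
def ski_rental_cost(days_skied, buy_price, rent_price=1):
    """Break-even rule: rent until the rent already paid would reach the
    purchase price, then buy. Decisions are made online, day by day."""
    paid = 0
    for _ in range(days_skied):
        if paid + rent_price >= buy_price:   # renting once more reaches the buy price,
            return paid + buy_price          # so buy now instead
        paid += rent_price
    return paid                              # season ended before buying was worthwhile

for days in (3, 10, 100):
    online = ski_rental_cost(days, buy_price=10)
    offline = min(days * 1, 10)              # optimal cost with hindsight
    print(f"{days:3d} days: online pays {online}, optimum is {offline}, "
          f"ratio {online / offline:.2f}")
```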
Max ERC Funding
1 519 285 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym AFMIDMOA
Project "Applying Fundamental Mathematics in Discrete Mathematics, Optimization, and Algorithmics"
Researcher (PI) Alexander Schrijver
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Call Details Advanced Grant (AdG), PE1, ERC-2013-ADG
Summary "This proposal aims at strengthening the connections between more fundamentally oriented areas of mathematics like algebra, geometry, analysis, and topology, and the more applied oriented and more recently emerging disciplines of discrete mathematics, optimization, and algorithmics.
The overall goal of the project is to obtain, with methods from fundamental mathematics, new effective tools to unravel the complexity of structures like graphs, networks, codes, knots, polynomials, and tensors, and to get a grip on such complex structures by new efficient characterizations, sharper bounds, and faster algorithms.
In the last few years, there have been several new developments where methods from representation theory, invariant theory, algebraic geometry, measure theory, functional analysis, and topology found new applications in discrete mathematics and optimization, both theoretically and algorithmically. Among the typical application areas are networks, coding, routing, timetabling, statistical and quantum physics, and computer science.
The project focuses in particular on:
A. Understanding partition functions with invariant theory and algebraic geometry (a toy example of such a partition function follows this summary)
B. Graph limits, regularity, Hilbert spaces, and low rank approximation of polynomials
C. Reducing complexity in optimization by exploiting symmetry with representation theory
D. Reducing complexity in discrete optimization by homotopy and cohomology
These research modules are interconnected by themes like symmetry, regularity, and complexity, and by common methods from algebra, analysis, geometry, and topology."
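As a toy example of the partition functions in module A (background only, not taken from the proposal): the number of graph homomorphisms hom(G, H) is such a partition function, and it specialises to proper colourings, independent sets, and the Potts model from statistical physics. The brute-force count below verifies the known value hom(C_4, K_3) = 18, the number of proper 3-colourings of a 4-cycle.

```python
from itertools import product

def hom_count(n_g, edges_g, n_h, adjacent_h):
    """Number of maps V(G) -> V(H) that send every edge of G to an edge of H."""
    return sum(
        all(adjacent_h(phi[u], phi[v]) for u, v in edges_g)
        for phi in product(range(n_h), repeat=n_g)
    )

# G = 4-cycle, H = complete graph K_3 (adjacency: distinct colours), so
# hom(G, H) counts proper 3-colourings of C_4; the chromatic polynomial
# (q-1)^n + (-1)^n (q-1) gives 2^4 + 2 = 18.
c4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(hom_count(4, c4_edges, 3, lambda a, b: a != b))   # 18
```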
Max ERC Funding
2 001 598 €
Duration
Start date: 2014-01-01, End date: 2018-12-31
Project acronym ALEXANDRIA
Project "Foundations for Temporal Retrieval, Exploration and Analytics in Web Archives"
Researcher (PI) Wolfgang Nejdl
Host Institution (HI) GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary "Significant parts of our cultural heritage are produced on the Web, yet only insufficient opportunities exist for accessing and exploring the past of the Web. The ALEXANDRIA project aims to develop models, tools and techniques necessary to archive and index relevant parts of the Web, and to retrieve and explore this information in a meaningful way. While the easy accessibility to the current Web is a good baseline, optimal access to Web archives requires new models and algorithms for retrieval, exploration, and analytics which go far beyond what is needed to access the current state of the Web. This includes taking into account the unique temporal dimension of Web archives, structured semantic information already available on the Web, as well as social media and network information.
Within ALEXANDRIA, we will significantly advance semantic and time-based indexing for Web archives using human-compiled knowledge available on the Web, to efficiently index, retrieve and explore information about entities and events from the past. In doing so, we will focus on the concurrent evolution of this knowledge and the Web content to be indexed, and take into account diversity and incompleteness of this knowledge. We will further investigate mixed crowd- and machine-based Web analytics to support long-running and collaborative retrieval and analysis processes on Web archives. Usage of implicit human feedback will be essential to provide better indexing through insights during the analysis process and to better focus harvesting of content.
The ALEXANDRIA Testbed will provide an important context for research, exploration and evaluation of the concepts, methods and algorithms developed in this project, and will provide both relevant collections and algorithms that enable further research on and practical application of our research results to existing archives like the Internet Archive, the Internet Memory Foundation and Web archives maintained by European national libraries."
Max ERC Funding
2 493 600 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym ALGSTRONGCRYPTO
Project Algebraic Methods for Stronger Crypto
Researcher (PI) Ronald John Fitzgerald CRAMER
Host Institution (HI) STICHTING NEDERLANDSE WETENSCHAPPELIJK ONDERZOEK INSTITUTEN
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Our field is cryptology. Our overarching objective is to advance significantly the frontiers in the design and analysis of high-security cryptography for the next generation. In particular, we wish to enhance the efficiency, functionality, and, last but not least, fundamental understanding of cryptographic security against very powerful adversaries. Our approach is to develop completely novel methods by deepening, strengthening and broadening the algebraic foundations of the field.
Concretely, our lens builds on the arithmetic codex. This is a general, abstract cryptographic primitive whose basic theory we recently developed and whose asymptotic part, which relies on algebraic geometry, enjoys crucial applications in surprising foundational results on constant communication-rate two-party cryptography. A codex is a linear (error-correcting) code that, when its ambient vector space is endowed just with coordinate-wise multiplication, can be viewed as simulating, up to some degree, richer arithmetical structures such as finite fields (or products thereof), or, more generally, finite-dimensional algebras over finite fields. Besides this degree, the notion also captures the coordinate-localities for which the simulation holds and those for which it fails entirely.
Our method is based on novel perspectives on codices which significantly widen their scope and strengthen their utility. In particular, we bring symmetries, computational and complexity-theoretic aspects, and connections with algebraic number theory, algebraic geometry, and algebraic combinatorics into play in novel ways. Our applications range from public-key cryptography to secure multi-party computation.
Our proposal is subdivided into 3 interconnected modules:
(1) Algebraic- and Number Theoretical Cryptanalysis
(2) Construction of Algebraic Crypto Primitives
(3) Advanced Theory of Arithmetic Codices
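As background for the codex notion described above, here is a toy instance (illustrative only, with arbitrary parameters) of the coordinate-wise multiplication property in its classical Reed-Solomon/Shamir incarnation: two secrets are encoded as evaluation vectors of low-degree polynomials over a small prime field, the coordinate-wise product of the two vectors is again a codeword of a higher-degree evaluation code, and it decodes to the product of the secrets.

```python
import random

P = 13                       # small prime field GF(13); all parameters are toy choices
POINTS = [1, 2, 3, 4, 5]     # distinct non-zero evaluation points

def encode(secret, degree):
    """Evaluation vector of a random polynomial f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P for x in POINTS]

def interpolate_at_zero(points, values):
    """Lagrange-interpolate the unique low-degree polynomial and evaluate at 0."""
    total = 0
    for xj, yj in zip(points, values):
        num, den = 1, 1
        for xm in points:
            if xm != xj:
                num = (num * (-xm)) % P
                den = (den * (xj - xm)) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P   # Fermat inverse of den
    return total

a, b = 4, 7
ca = encode(a, degree=1)                          # codeword of the degree-<2 code
cb = encode(b, degree=1)
cab = [x * y % P for x, y in zip(ca, cb)]         # coordinate-wise product
# The product vector lies in the degree-<3 code, so 3 coordinates determine it.
print(interpolate_at_zero(POINTS[:3], cab[:3]), "==", a * b % P)   # both print 2
```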
Max ERC Funding
2 447 439 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym AMAREC
Project Amenability, Approximation and Reconstruction
Researcher (PI) Wilhelm WINTER
Host Institution (HI) WESTFAELISCHE WILHELMS-UNIVERSITAET MUENSTER
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary Algebras of operators on Hilbert spaces were originally introduced as the right framework for the mathematical description of quantum mechanics. In modern mathematics the scope has broadened considerably due to the highly versatile nature of operator algebras. They are particularly useful in the analysis of groups and their actions. Amenability is a finiteness property which occurs in many different contexts and which can be characterised in many different ways. We will analyse amenability in terms of approximation properties, in the frameworks of abstract C*-algebras, of topological dynamical systems, and of discrete groups. Such approximation properties will serve as bridging devices between these setups, and they will be used to systematically recover geometric information about the underlying structures. When passing from groups, and more generally from dynamical systems, to operator algebras, one loses information, but one gains new tools to isolate and analyse pertinent properties of the underlying structure. We will mostly be interested in the topological setting, and in the associated C*-algebras. Amenability of groups or of dynamical systems then translates into the completely positive approximation property. Systems of completely positive approximations store all the essential data about a C*-algebra, and sometimes one can arrange the systems so that one can directly read off such information. For transformation group C*-algebras, one can achieve this by using approximation properties of the underlying dynamics. To some extent one can even go back and extract dynamical approximation properties from completely positive approximations of the C*-algebra. This interplay between approximation properties in topological dynamics and in noncommutative topology carries a surprisingly rich structure. It connects directly to the heart of the classification problem for nuclear C*-algebras on the one hand, and to central open questions on amenable dynamics on the other.
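For readers outside operator algebras, the completely positive approximation property invoked above is usually stated as follows (standard textbook formulation, recalled here only as background):

```latex
% A C*-algebra A has the completely positive approximation property
% (equivalently, by a classical theorem, A is nuclear) if its identity map
% approximately factors through finite-dimensional algebras via completely
% positive contractive (c.p.c.) maps, along a net (a sequence if A is separable):
\[
  \exists\; \text{finite-dimensional C*-algebras } F_n,\qquad
  \varphi_n \colon A \to F_n, \quad \psi_n \colon F_n \to A \ \text{ c.p.c.},
\]
\[
  \text{such that}\quad
  \bigl\| \psi_n\bigl(\varphi_n(a)\bigr) - a \bigr\|
  \;\xrightarrow[\,n \to \infty\,]{}\; 0
  \qquad \text{for every } a \in A .
\]
```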
Max ERC Funding
1 596 017 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym AMPLify
Project Allocation Made PracticaL
Researcher (PI) Toby Walsh
Host Institution (HI) TECHNISCHE UNIVERSITAT BERLIN
Call Details Advanced Grant (AdG), PE6, ERC-2014-ADG
Summary Allocation Made PracticaL
The AMPLify project will lay the foundations of a new field, computational behavioural game theory, which brings a computational perspective, computational implementation, and behavioural insights to game theory. These foundations will be laid by tackling a pressing problem facing society today: the efficient and fair allocation of resources and costs. Research in allocation has previously considered simple, abstract models like cake cutting. We propose to develop richer models that capture important new features, like asynchronicity, which occur in many markets being developed in our highly connected and online world. The mechanisms currently used to allocate resources and costs are limited to these simple, abstract models and also do not take into account how people actually behave in practice. We will therefore design new mechanisms for these richer allocation problems that exploit insights gained from behavioural game theory, like loss aversion. We will also tackle the complexity of these rich models and mechanisms with computational tools. Finally, we will use computation to increase both the efficiency and fairness of allocations. As a result, we will be able to do more with fewer resources and greater fairness. Our initial case studies in resource and cost allocation demonstrate that we can improve efficiency greatly, offering one company alone savings of up to 10% (which is worth tens of millions of dollars every year). We predict even greater impact with the more sophisticated mechanisms to be developed during the course of this project.
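For readers unfamiliar with the cake-cutting models mentioned above, the simplest such protocol is cut-and-choose for two agents: one agent cuts the cake at a point she values as exactly one half, the other picks the piece he prefers, and both end up with a piece worth at least one half in their own valuation. The sketch below is a standard textbook illustration with toy valuation densities, not part of the project.

```python
def value(density, a, b, steps=2_000):
    """Approximate integral of a valuation density over the interval [a, b]."""
    dx = (b - a) / steps
    return sum(density(a + (i + 0.5) * dx) for i in range(steps)) * dx

def cut_and_choose(density1, density2):
    """Agent 1 cuts the cake [0, 1] into two pieces she values equally
    (found by bisection); agent 2 then takes the piece he values more."""
    half = value(density1, 0.0, 1.0) / 2
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if value(density1, 0.0, mid) < half:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    left, right = (0.0, cut), (cut, 1.0)
    if value(density2, *left) >= value(density2, *right):
        return {"cut": cut, "agent1": right, "agent2": left}
    return {"cut": cut, "agent1": left, "agent2": right}

# Agent 1 values the cake uniformly; agent 2 only cares about the right half.
print(cut_and_choose(lambda x: 1.0, lambda x: 0.0 if x < 0.5 else 2.0))
```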
Max ERC Funding
2 499 681 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym AMPLIFY
Project Amplifying Human Perception Through Interactive Digital Technologies
Researcher (PI) Albrecht Schmidt
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Current technical sensor systems offer capabilities that are superior to human perception. Cameras can capture a spectrum that is wider than visible light, high-speed cameras can show movements that are invisible to the human eye, and directional microphones can pick up sounds at long distances. The vision of this project is to lay a foundation for the creation of digital technologies that provide novel sensory experiences and new perceptual capabilities for humans that are natural and intuitive to use. In a first step, the project will assess the feasibility of creating artificial human senses that provide new perceptual channels to the human mind, without increasing the experienced cognitive load. A particular focus is on creating intuitive and natural control mechanisms for amplified senses using eye gaze, muscle activity, and brain signals. Through the creation of a prototype that provides mildly unpleasant stimulations in response to perceived information, the feasibility of implementing an artificial reflex will be experimentally explored. The project will quantify the effectiveness of new senses and artificial perceptual aids compared to the baseline of unaugmented perception. The overall objective is to systematically research, explore, and model new means for increasing the human intake of information in order to lay the foundation for new and improved human senses enabled through digital technologies and to enable artificial reflexes. The ground-breaking contributions of this project are (1) to demonstrate the feasibility of reliably implementing amplified senses and new perceptual capabilities, (2) to prove the possibility of creating an artificial reflex, (3) to provide an example implementation of amplified cognition that is empirically validated, and (4) to develop models, concepts, components, and platforms that will enable and ease the creation of interactive systems that measurably increase human perceptual capabilities.
Max ERC Funding
1 925 250 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym ANAMULTISCALE
Project Analysis of Multiscale Systems Driven by Functionals
Researcher (PI) Alexander Mielke
Host Institution (HI) FORSCHUNGSVERBUND BERLIN EV
Call Details Advanced Grant (AdG), PE1, ERC-2010-AdG_20100224
Summary Many complex phenomena in the sciences are described by nonlinear partial differential equations, the solutions of which exhibit oscillations and concentration effects on multiple temporal or spatial scales. Our aim is to use methods from applied analysis to contribute to the understanding of the interplay of effects on different scales. The central question is to determine those quantities on the microscale which are needed for the correct description of the macroscopic evolution.
We aim to develop a mathematical framework for analyzing and modeling coupled systems with multiple scales. This will include Hamiltonian dynamics as well as different types of dissipation like gradient flows or rate-independent dynamics. The choice of models will be guided by specific applications in material modeling (e.g., thermoplasticity, pattern formation, porous media) and optoelectronics (pulse interaction, Maxwell-Bloch systems, semiconductors, quantum mechanics). The research will address mathematically fundamental issues like existence and stability of solutions but will mainly be devoted to the modeling of multiscale phenomena in evolution systems. We will focus on systems with geometric structures, where the dynamics is driven by functionals. Thus, we can go far beyond the classical theory of homogenization and singular perturbations. The novel features of our approach are
- the combination of different dynamical effects in one framework,
- the use of geometric and metric structures for coupled partial differential equations,
- the exploitation of Gamma-convergence for evolution systems driven by functionals.
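For concreteness, the two dissipative structures named above can be written down in generic notation (an energy functional \mathcal{E}, a state-dependent metric operator \mathbb{G}, and a dissipation potential \mathcal{R} that is convex and positively 1-homogeneous in the rate; this notation is illustrative and not taken from the proposal):
\[
  \mathbb{G}(u)\,\dot u(t) = -\,\mathrm{D}\mathcal{E}(u(t))
  \qquad\Longrightarrow\qquad
  \frac{\mathrm d}{\mathrm dt}\,\mathcal{E}(u(t))
  = -\,\bigl\langle \mathbb{G}(u)\,\dot u,\dot u\bigr\rangle \;\le\; 0,
\]
\[
  0 \;\in\; \partial_{\dot u}\mathcal{R}\bigl(u,\dot u(t)\bigr)
  \;+\; \mathrm{D}_u\mathcal{E}\bigl(t,u(t)\bigr).
\]
In the first case the energy decreases along solutions (a gradient flow); in the second, the 1-homogeneity of \mathcal{R} makes the inclusion invariant under monotone time reparametrisations, which is the defining feature of rate-independent dynamics.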
Max ERC Funding
1 390 000 €
Duration
Start date: 2011-04-01, End date: 2017-03-31
Project acronym ANOPTSETCON
Project Analysis of optimal sets and optimal constants: old questions and new results
Researcher (PI) Aldo Pratelli
Host Institution (HI) FRIEDRICH-ALEXANDER-UNIVERSITAET ERLANGEN NUERNBERG
Call Details Starting Grant (StG), PE1, ERC-2010-StG_20091028
Summary The analysis of geometric and functional inequalities naturally leads one to consider the extremal cases, and thus to look for optimal sets, optimal functions, or optimal constants. The most classical examples are the (different versions of the) isoperimetric inequality and the Sobolev-type inequalities. Much is known about equality cases and best constants, but many questions that seem quite natural still have no answer. For instance, even in the two-dimensional case the answer to a question of Brezis is not known: which set, among those of a given volume, has the largest Sobolev-Poincaré constant for p=1? This is a very natural problem, and it appears reasonable that the optimal set should be the ball, but this has never been proved. The interest of problems like this lies not only in the extreme simplicity of the questions and in their classical flavour, but also in the new ideas and techniques that are needed to provide the answers.
The main techniques that we aim to use are fine symmetrization arguments, geometric constructions, and tools from mass transportation (which is well known to be deeply connected with functional inequalities). These are the basic tools that we have already used in recent years to obtain many results in a specific direction, namely the search for sharp quantitative inequalities. Our first result, obtained together with Fusco and Maggi, can be described as follows. Everybody knows that the set which minimizes the perimeter at given volume is the ball. But is it true that a set which almost minimizes the perimeter must be close to a ball? This question was posed in the 1920s, and many partial results appeared over the years. In our paper (Ann. of Math., 2007) we proved the sharp result. Many other results of this kind have been obtained in the last two years.
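For orientation, the sharp stability result alluded to above is usually phrased through the Fraenkel asymmetry and the isoperimetric deficit (standard notation, with an unspecified dimensional constant):
\[
  \alpha(E) \;=\; \min_{x\in\mathbb{R}^n}\frac{\bigl|E\,\triangle\,B_r(x)\bigr|}{|E|},
  \qquad |B_r|=|E|,
  \qquad
  \delta(E) \;=\; \frac{P(E)-P(B_r)}{P(B_r)},
\]
and states that every Borel set E \subset \mathbb{R}^n of finite positive measure satisfies
\[
  \alpha(E) \;\le\; C(n)\,\sqrt{\delta(E)},
\]
with the exponent 1/2 optimal: a set whose perimeter nearly attains the isoperimetric minimum must be quantitatively close, in measure, to a ball.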
Max ERC Funding
540 000 €
Duration
Start date: 2010-08-01, End date: 2015-07-31
Project acronym ANTHOS
Project Analytic Number Theory: Higher Order Structures
Researcher (PI) Valentin Blomer
Host Institution (HI) GEORG-AUGUST-UNIVERSITAT GOTTINGEN STIFTUNG OFFENTLICHEN RECHTS
Call Details Starting Grant (StG), PE1, ERC-2010-StG_20091028
Summary This is a proposal for research at the interface of analytic number theory, automorphic forms and algebraic geometry. Motivated by fundamental conjectures in number theory, classical problems will be investigated in higher order situations: general number fields, automorphic forms on higher rank groups, the arithmetic of algebraic varieties of higher degree. In particular, I want to focus on
- computation of moments of L-functions of degree 3 and higher with applications to subconvexity and/or non-vanishing, as well as subconvexity for multiple L-functions;
- bounds for sup-norms of cusp forms on various spaces and equidistribution of Hecke correspondences;
- automorphic forms on higher rank groups and general number fields, in particular new bounds towards the Ramanujan conjecture;
- a proof of Manin's conjecture for a certain class of singular algebraic varieties.
The underlying methods are closely related; for example, rational points on algebraic varieties will be counted by a multiple L-series technique.
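As background for the last item, Manin's conjecture in its standard form (stated here generically, with the leading constant left unspecified) predicts, for a suitable Zariski-open subset U of a Fano variety V over \mathbb{Q} with anticanonical height H,
\[
  N_{U,H}(B) \;=\; \#\bigl\{x\in U(\mathbb{Q}) \,:\, H(x)\le B\bigr\}
  \;\sim\; c_{V,H}\; B\,(\log B)^{\rho-1}
  \qquad (B\to\infty),
\]
where \rho is the rank of the Picard group of V; the last objective above is to establish an asymptotic of this type for a certain class of singular varieties.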
Max ERC Funding
1 004 000 €
Duration
Start date: 2010-10-01, End date: 2015-09-30
Project acronym ANTICIPATE
Project Anticipatory Human-Computer Interaction
Researcher (PI) Andreas BULLING
Host Institution (HI) UNIVERSITAET STUTTGART
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Even after three decades of research on human-computer interaction (HCI), current general-purpose user interfaces (UI) still lack the ability to attribute mental states to their users, i.e. they fail to understand users' intentions and needs and to anticipate their actions. This drastically restricts their interactive capabilities.
ANTICIPATE aims to establish the scientific foundations for a new generation of user interfaces that pro-actively adapt to users' future input actions by monitoring their attention and predicting their interaction intentions - thereby significantly improving the naturalness, efficiency, and user experience of the interactions. Realising this vision of anticipatory human-computer interaction requires groundbreaking advances in everyday sensing of user attention from eye and brain activity. We will further pioneer methods to predict entangled user intentions and forecast interactive behaviour with fine temporal granularity during interactions in everyday stationary and mobile settings. Finally, we will develop fundamental interaction paradigms that enable anticipatory UIs to pro-actively adapt to users' attention and intentions in a mindful way. The new capabilities will be demonstrated in four challenging cases: 1) mobile information retrieval, 2) intelligent notification management, 3) Autism diagnosis and monitoring, and 4) computer-based training.
Anticipatory human-computer interaction offers a strong complement to existing UI paradigms that only react to user input post-hoc. If successful, ANTICIPATE will deliver the first important building blocks for implementing Theory of Mind in general-purpose UIs. As such, the project has the potential to drastically improve the billions of interactions we perform with computers every day, to trigger a wide range of follow-up research in HCI as well as adjacent areas within and outside computer science, and to act as a key technical enabler for new applications, e.g. in healthcare and education.
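To make the prediction step concrete, the following toy sketch trains a classifier on hypothetical gaze features (fixation duration, saccade amplitude, pupil-diameter change) to estimate whether an item selection is imminent; the features, synthetic data, and model are illustrative placeholders, not the project's method.

# Toy sketch: predict from a short gaze-feature window whether the user is
# about to select a UI element (synthetic stand-in data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Columns: mean fixation duration, saccade amplitude, pupil-diameter change.
X = rng.normal(size=(n, 3))
# Synthetic label: 1 = "selection follows within 500 ms" (would come from logs).
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# Probability of an imminent selection; an anticipatory UI could use it to
# pre-fetch content or pre-expand the likely target before the user acts.
p = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, p))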
Max ERC Funding
1 499 625 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym APEG
Project Algorithmic Performance Guarantees: Foundations and Applications
Researcher (PI) Susanne ALBERS
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary Optimization problems are ubiquitous in computer science. Almost every problem involves the optimization of some objective function. However, a major part of these problems cannot be solved to optimality. Therefore, algorithms that achieve provably good performance guarantees are of immense importance. Considerable progress has already been made, but great challenges remain: Some fundamental problems are not well understood. Moreover, for central problems arising in new applications, no solutions are known at all.
The goal of APEG is to significantly advance the state of the art on algorithmic performance guarantees. Specifically, the project has two missions: First, it will develop new algorithmic techniques, breaking new ground in the areas of online algorithms, approximation algorithms and algorithmic game theory. Second, it will apply these techniques to solve fundamental problems that are central in these algorithmic disciplines. APEG will attack long-standing open problems, some of which have been unresolved for several decades. Furthermore, it will formulate and investigate new algorithmic problems that arise in modern applications. The research agenda encompasses a broad spectrum of classical and timely topics including (a) resource allocation in computer systems, (b) data structuring, (c) graph problems, with relations to Internet advertising, (d) complex networks and (e) massively parallel systems. In addition to basic optimization objectives, the project will also study the new performance metric of energy minimization in computer systems.
Overall, APEG pursues cutting-edge algorithms research, focusing on both foundational problems and applications. Any progress promises to be a breakthrough or significant contribution.
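A canonical example of such a guarantee is Graham's greedy list scheduling: assigning each arriving job to the currently least-loaded of m identical machines yields a makespan at most (2 - 1/m) times the optimum, even though the jobs arrive online. The sketch below (illustrative only, not part of the proposal) implements this rule.

# Graham's list scheduling: greedy assignment to the least-loaded machine.
# Guarantee: makespan <= (2 - 1/m) * OPT on m identical machines.
import heapq

def list_schedule(jobs, m):
    """Assign jobs (processing times) online to m machines; return makespan."""
    loads = [(0.0, i) for i in range(m)]      # (current load, machine id)
    heapq.heapify(loads)
    assignment = []
    for p in jobs:
        load, i = heapq.heappop(loads)        # least-loaded machine so far
        assignment.append(i)
        heapq.heappush(loads, (load + p, i))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

if __name__ == "__main__":
    makespan, assignment = list_schedule([3, 5, 2, 7, 4, 6, 1], m=3)
    print("makespan:", makespan, "assignment:", assignment)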
Max ERC Funding
2 404 250 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym AQSER
Project Automorphic q-series and their application
Researcher (PI) Kathrin Bringmann
Host Institution (HI) UNIVERSITAET ZU KOELN
Call Details Starting Grant (StG), PE1, ERC-2013-StG
Summary This proposal aims to unravel mysteries at the frontier of number theory and other areas of mathematics and physics. The main focus will be to understand and exploit “modularity” of q-hypergeometric series. “Modular forms are functions on the complex plane that are inordinately symmetric.” (Mazur) The motivation comes from the wide-reaching applications of modularity in combinatorics, percolation, Lie theory, and physics (black holes).
The interplay between automorphic forms, q-series, and other areas of mathematics and physics is often two-sided. On the one hand, the other areas provide interesting examples of automorphic objects and predict their behavior. Sometimes these even motivate new classes of automorphic objects which have not been previously studied. On the other hand, knowing that certain generating functions are modular gives one access to deep theoretical tools to prove results in other areas. “Mathematics is a language, and we need that language to understand the physics of our universe.” (Ooguri) Understanding this interplay has attracted the attention of researchers from a variety of areas. However, proofs of modularity of q-hypergeometric series currently fall far short of a comprehensive theory to describe the interplay between them and automorphic forms. A recent conjecture of W. Nahm relates the modularity of such series to K-theory. In this proposal I aim to fill this gap and provide a better understanding of this interplay by building a general structural framework enveloping these q-series. For this I will employ new kinds of automorphic objects and embed the functions of interest into bigger families.
A successful outcome of the proposed research will open further horizons and also answer open questions, even those in other areas which were not addressed in this proposal; for example the new theory could be applied to better understand Donaldson invariants.
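The prototype of this phenomenon is the first Rogers-Ramanujan identity: the q-hypergeometric sum below equals an infinite product and, after multiplication by q^{-1/60}, becomes a component of a weight-zero vector-valued modular function. Nahm sums generalize its left-hand side, and Nahm's conjecture ties their modularity to torsion conditions in the Bloch group, i.e. to K-theory:
\[
  \sum_{n=0}^{\infty} \frac{q^{n^{2}}}{(q;q)_{n}}
  \;=\; \prod_{n=0}^{\infty} \frac{1}{(1-q^{5n+1})(1-q^{5n+4})},
  \qquad (q;q)_{n} := \prod_{k=1}^{n}\bigl(1-q^{k}\bigr),
\]
\[
  f_{A,B,C}(q) \;=\; \sum_{n\in\mathbb{Z}_{\ge 0}^{r}}
  \frac{q^{\frac12\,n^{\mathsf T}A\,n \;+\; B^{\mathsf T}n \;+\; C}}
       {(q;q)_{n_{1}}\cdots (q;q)_{n_{r}}}
  \qquad (A \text{ symmetric positive definite, } r\times r).
\]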
Max ERC Funding
1 240 500 €
Duration
Start date: 2014-01-01, End date: 2019-04-30