Project acronym 100 Archaic Genomes
Project Genome sequences from extinct hominins
Researcher (PI) Svante PÄÄBO
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Advanced Grant (AdG), LS2, ERC-2015-AdG
Summary Neandertals and Denisovans, an Asian group distantly related to Neandertals, are the closest evolutionary relatives of present-day humans. They are thus of direct relevance for understanding the origin of modern humans and how modern humans differ from their closest relatives. We will generate genome-wide data from a large number of Neandertal and Denisovan individuals from across their geographical and temporal range, as well as from other extinct hominin groups that we may discover. This will be possible by automating the highly sensitive approaches to ancient DNA extraction and DNA library construction that we have developed, so that they can be applied to many specimens from many sites in order to identify those that contain retrievable DNA. Whenever possible we will sequence whole genomes, and in other cases use DNA capture methods to generate high-quality data from representative parts of the genome. This will allow us to study the population history of Neandertals and Denisovans, elucidate how many times and where these extinct hominins contributed genes to present-day people, and determine the extent to which modern humans and archaic groups contributed genetically to Neandertals and Denisovans. By retrieving DNA from specimens that go back to the Middle Pleistocene we will furthermore shed light on the early history and origins of Neandertals and Denisovans.
Max ERC Funding
2 350 000 €
Duration
Start date: 2016-11-01, End date: 2021-10-31
Project acronym 3D Reloaded
Project 3D Reloaded: Novel Algorithms for 3D Shape Inference and Analysis
Researcher (PI) Daniel Cremers
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary Despite their amazing success, we believe that computer vision algorithms have only scratched the surface of what can be done in terms of modeling and understanding our world from images. We believe that novel image analysis techniques will be a major enabler and driving force behind next-generation technologies, enhancing everyday life and opening up radically new possibilities. And we believe that the key to achieving this is to develop algorithms for reconstructing and analyzing the 3D structure of our world.
In this project, we will focus on three lines of research:
A) We will develop algorithms for 3D reconstruction from standard color cameras and from RGB-D cameras. In particular, we will promote real-time-capable direct and dense methods. In contrast to the classical two-stage approach of sparse feature-point based motion estimation and subsequent dense reconstruction, these methods optimally exploit all color information to jointly estimate dense geometry and camera motion.
B) We will develop algorithms for 3D shape analysis, including rigid and non-rigid matching, decomposition and interpretation of 3D shapes. We will focus on algorithms which are optimal or near-optimal. One of the major computational challenges lies in generalizing existing 2D shape analysis techniques to shapes in 3D and 4D (temporal evolutions of 3D shape).
C) We will develop shape priors for 3D reconstruction. These can be learned from sample shapes or acquired during the reconstruction process. For example, when reconstructing a larger office, algorithms may exploit the geometric self-similarity of the scene, storing a model of a chair and its multiple instances only once rather than multiple times.
Advancing the state of the art in geometric reconstruction and geometric analysis will have a profound impact well beyond computer vision. We strongly believe that we have the necessary competence to pursue this project. Preliminary results have been well received by the community.
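The direct, dense approach in A) replaces sparse feature matching with a photometric objective: every pixel's intensity constrains both geometry and camera motion. The sketch below shows one form such a per-pixel photometric residual can take; it is an illustration with our own variable names, not code from the project.

```python
import numpy as np

def photometric_residuals(img_ref, img_cur, depth_ref, K, R, t):
    """Per-pixel photometric error of one direct image-alignment step.

    Each reference pixel is back-projected using its depth, moved by the
    candidate camera motion (R, t), reprojected into the current image,
    and compared by intensity.  (Nearest-neighbour lookup keeps the
    sketch short; real systems interpolate and use robust weights.)
    """
    h, w = img_ref.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    residuals = []
    for v in range(h):
        for u in range(w):
            z = depth_ref[v, u]
            if z <= 0:
                continue  # no depth available for this pixel
            # back-project to 3D and apply the candidate motion
            p = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
            q = R @ p + t
            if q[2] <= 0:
                continue  # point moved behind the camera
            # reproject into the current frame
            u2 = int(round(fx * q[0] / q[2] + cx))
            v2 = int(round(fy * q[1] / q[2] + cy))
            if 0 <= u2 < w and 0 <= v2 < h:
                residuals.append(float(img_cur[v2, u2]) - float(img_ref[v, u]))
    return np.array(residuals)
```

With identical frames and identity motion all residuals vanish; a direct method iteratively adjusts the motion (and, in the joint formulation, the depths) to drive this error down.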
Max ERC Funding
2 000 000 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym 4DRepLy
Project Closing the 4D Real World Reconstruction Loop
Researcher (PI) Christian THEOBALT
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2017-CoG
Summary 4D reconstruction, i.e. camera-based dense reconstruction of dynamic scenes, is a grand challenge in computer graphics and computer vision. Despite great progress, 4D capture of the complex, diverse real world outside a studio is still far from feasible. 4DRepLy builds a new generation of high-fidelity 4D reconstruction (4DRecon) methods. They will be the first to efficiently capture all types of deformable objects (humans and other types) in crowded real-world scenes with a single color or depth camera. They will capture space-time coherent deforming geometry, motion, high-frequency reflectance and illumination at unprecedented detail, and will be the first to handle difficult occlusions, topology changes and large groups of interacting objects. They will automatically adapt to new scene types, yet deliver models with meaningful, interpretable parameters. This requires far-reaching contributions: First, we develop groundbreaking new plasticity-enhanced model-based 4D reconstruction methods that automatically adapt to new scenes. Second, we develop radically new machine-learning-based dense 4D reconstruction methods. Third, these model- and learning-based methods are combined in two revolutionary new classes of 4DRecon methods: 1) advanced fusion-based methods and 2) methods with deep architectural integration. Both 1) and 2) are automatically designed in the 4D Real World Reconstruction Loop, a revolutionary new design paradigm in which 4DRecon methods refine and adapt themselves while continuously processing unlabeled real-world input. This overcomes the previously unbreakable scalability barrier to real-world scene diversity, complexity and generality. This paradigm shift opens up a new research direction in graphics and vision and has far-reaching relevance across many scientific fields.
It enables new applications of broad societal reach and significant economic impact, e.g., for visual media and virtual/augmented reality, and for future autonomous and robotic systems.
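The core idea of the Reconstruction Loop, a model refining itself on unlabeled input, has a classic small-scale analogue in Lloyd-style refinement of cluster centres. The one-dimensional toy below is our illustration of that self-refinement pattern, not the project's algorithm:

```python
def refine_centres(data, centres, iters=20):
    """Refine two cluster centres on unlabeled 1-D data (Lloyd's algorithm).

    Each pass assigns every point to its nearest centre, then moves each
    centre to the mean of its assigned points: the model improves itself
    without any labels, the essence of a self-refinement loop.
    """
    c0, c1 = centres
    for _ in range(iters):
        g0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in data if abs(x - c0) > abs(x - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return c0, c1
```

Starting from rough centres (0, 12) on the points [0, 1, 2, 10, 11, 12], the loop settles on the two cluster means without ever seeing a label.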
Max ERC Funding
1 977 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym AMPLify
Project Allocation Made PracticaL
Researcher (PI) Toby Walsh
Host Institution (HI) TECHNISCHE UNIVERSITAT BERLIN
Call Details Advanced Grant (AdG), PE6, ERC-2014-AdG
Summary Allocation Made PracticaL
The AMPLify project will lay the foundations of a new field, computational behavioural game theory, which brings a computational perspective, computational implementation, and behavioural insights to game theory. These foundations will be laid by tackling a pressing problem facing society today: the efficient and fair allocation of resources and costs. Research in allocation has previously considered simple, abstract models like cake cutting. We propose to develop richer models that capture important new features, such as asynchronicity, which occur in many markets being developed in our highly connected and online world. The mechanisms currently used to allocate resources and costs are limited to these simple, abstract models and also do not take into account how people actually behave in practice. We will therefore design new mechanisms for these richer allocation problems that exploit insights gained from behavioural game theory, like loss aversion. We will also tackle the complexity of these rich models and mechanisms with computational tools. Finally, we will use computation to increase both the efficiency and fairness of allocations. As a result, we will be able to do more with fewer resources and greater fairness. Our initial case studies in resource and cost allocation demonstrate that we can improve efficiency greatly, offering one company alone savings of up to 10% (worth tens of millions of dollars every year). We predict even greater impact with the more sophisticated mechanisms to be developed during the course of this project.
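For concreteness, one textbook baseline in fair division of indivisible items can be stated in a few lines: round-robin allocation, which guarantees envy-freeness up to one good (EF1). The sketch below is purely illustrative and is not one of the project's proposed mechanisms:

```python
def round_robin(items, valuations):
    """Allocate indivisible items by round robin.

    Agents take turns picking their most-valued remaining item.  This
    classic mechanism guarantees envy-freeness up to one good (EF1).
    valuations: one dict per agent, mapping item -> value.
    """
    remaining = set(items)
    bundles = [[] for _ in valuations]
    agent = 0
    while remaining:
        pick = max(remaining, key=lambda it: valuations[agent][it])
        bundles[agent].append(pick)
        remaining.remove(pick)
        agent = (agent + 1) % len(valuations)
    return bundles
```

With two agents valuing items {a: 4, b: 3, c: 2, d: 1} and {a: 1, b: 4, c: 3, d: 2}, the mechanism yields the bundles [a, c] and [b, d].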
Max ERC Funding
2 499 681 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym AMPLIFY
Project Amplifying Human Perception Through Interactive Digital Technologies
Researcher (PI) Albrecht Schmidt
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Current technical sensor systems offer capabilities that are superior to human perception. Cameras can capture a spectrum that is wider than visible light, high-speed cameras can show movements that are invisible to the human eye, and directional microphones can pick up sounds at long distances. The vision of this project is to lay a foundation for the creation of digital technologies that provide novel sensory experiences and new perceptual capabilities for humans that are natural and intuitive to use. In a first step, the project will assess the feasibility of creating artificial human senses that provide new perceptual channels to the human mind, without increasing the experienced cognitive load. A particular focus is on creating intuitive and natural control mechanisms for amplified senses using eye gaze, muscle activity, and brain signals. Through the creation of a prototype that provides mildly unpleasant stimuli in response to perceived information, the feasibility of implementing an artificial reflex will be experimentally explored. The project will quantify the effectiveness of new senses and artificial perceptual aids compared to the baseline of unaugmented perception. The overall objective is to systematically research, explore, and model new means for increasing the human intake of information in order to lay the foundation for new and improved human senses enabled through digital technologies and to enable artificial reflexes. The ground-breaking contributions of this project are (1) to demonstrate the feasibility of reliably implementing amplified senses and new perceptual capabilities, (2) to prove the possibility of creating an artificial reflex, (3) to provide an example implementation of amplified cognition that is empirically validated, and (4) to develop models, concepts, components, and platforms that will enable and ease the creation of interactive systems that measurably increase human perceptual capabilities.
Max ERC Funding
1 925 250 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym ANTICIPATE
Project Anticipatory Human-Computer Interaction
Researcher (PI) Andreas BULLING
Host Institution (HI) UNIVERSITAET STUTTGART
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Even after three decades of research on human-computer interaction (HCI), current general-purpose user interfaces (UI) still lack the ability to attribute mental states to their users, i.e. they fail to understand users' intentions and needs and to anticipate their actions. This drastically restricts their interactive capabilities.
ANTICIPATE aims to establish the scientific foundations for a new generation of user interfaces that pro-actively adapt to users' future input actions by monitoring their attention and predicting their interaction intentions - thereby significantly improving the naturalness, efficiency, and user experience of the interactions. Realising this vision of anticipatory human-computer interaction requires groundbreaking advances in everyday sensing of user attention from eye and brain activity. We will further pioneer methods to predict entangled user intentions and forecast interactive behaviour with fine temporal granularity during interactions in everyday stationary and mobile settings. Finally, we will develop fundamental interaction paradigms that enable anticipatory UIs to pro-actively adapt to users' attention and intentions in a mindful way. The new capabilities will be demonstrated in four challenging cases: 1) mobile information retrieval, 2) intelligent notification management, 3) autism diagnosis and monitoring, and 4) computer-based training.
Anticipatory human-computer interaction offers a strong complement to existing UI paradigms that only react to user input post-hoc. If successful, ANTICIPATE will deliver the first important building blocks for implementing Theory of Mind in general-purpose UIs. As such, the project has the potential to drastically improve the billions of interactions we perform with computers every day, to trigger a wide range of follow-up research in HCI as well as adjacent areas within and outside computer science, and to act as a key technical enabler for new applications, e.g. in healthcare and education.
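A deliberately primitive stand-in illustrates the idea of predicting a user's next input action: a first-order Markov predictor that simply counts which action tends to follow which. Names and design here are ours, for illustration only; the project's intention-forecasting models are far richer.

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """First-order Markov predictor of a user's next action.

    It counts bigrams of observed actions and predicts the most
    frequent follower of the current one, or None if unseen.
    """
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def observe(self, actions):
        # record every (previous action, next action) pair
        for prev, nxt in zip(actions, actions[1:]):
            self.bigrams[prev][nxt] += 1

    def predict(self, current):
        followers = self.bigrams.get(current)
        if not followers:
            return None
        return followers.most_common(1)[0][0]
```

After observing a session such as open, search, read, open, search, read, open, mail, the predictor anticipates "search" after "open", the kind of signal an anticipatory UI could use to pre-fetch or pre-render.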
Max ERC Funding
1 499 625 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym APEG
Project Algorithmic Performance Guarantees: Foundations and Applications
Researcher (PI) Susanne ALBERS
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary Optimization problems are ubiquitous in computer science. Almost every problem involves the optimization of some objective function. However, a large share of these problems cannot be solved to optimality. Therefore, algorithms that achieve provably good performance guarantees are of immense importance. Considerable progress has already been made, but great challenges remain: some fundamental problems are not well understood. Moreover, for central problems arising in new applications, no solutions are known at all.
The goal of APEG is to significantly advance the state of the art on algorithmic performance guarantees. Specifically, the project has two missions: First, it will develop new algorithmic techniques, breaking new ground in the areas of online algorithms, approximation algorithms and algorithmic game theory. Second, it will apply these techniques to solve fundamental problems that are central in these algorithmic disciplines. APEG will attack long-standing open problems, some of which have been unresolved for several decades. Furthermore, it will formulate and investigate new algorithmic problems that arise in modern applications. The research agenda encompasses a broad spectrum of classical and timely topics including (a) resource allocation in computer systems, (b) data structuring, (c) graph problems, with relations to Internet advertising, (d) complex networks and (e) massively parallel systems. In addition to basic optimization objectives, the project will also study the new performance metric of energy minimization in computer systems.
Overall, APEG pursues cutting-edge algorithms research, focusing on both foundational problems and applications. Any progress promises to be a breakthrough or significant contribution.
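A canonical example of the kind of performance guarantee studied in online algorithms is the break-even strategy for the ski-rental problem, whose cost is provably at most twice the offline optimum. The sketch below is the textbook algorithm, included only to make the notion of a competitive guarantee concrete; it is not a project result:

```python
def ski_rental_cost(days, buy_price, rent_price=1):
    """Cost of the classic break-even ski-rental strategy.

    Rent until the rent paid would equal the purchase price, then buy.
    The total cost is at most twice the offline optimum, i.e. the
    strategy is 2-competitive.
    """
    threshold = buy_price // rent_price  # break-even day
    if days < threshold:
        return days * rent_price                      # kept renting
    return (threshold - 1) * rent_price + buy_price   # bought on day `threshold`

def offline_optimum(days, buy_price, rent_price=1):
    """Cost of the clairvoyant strategy that knows `days` in advance."""
    return min(days * rent_price, buy_price)
```

For a purchase price of 10 rent units, renting 5 days costs 5 (optimal), while skiing 100 days costs 9 + 10 = 19 against an optimum of 10, a ratio below 2 for every input length.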
Max ERC Funding
2 404 250 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym ARCA
Project Analysis and Representation of Complex Activities in Videos
Researcher (PI) Juergen Gall
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of the project is to automatically analyse human activities observed in videos. Any solution to this problem will allow the development of novel applications. It could be used to create short videos that summarize daily activities to support patients suffering from Alzheimer's disease. It could also be used for education, e.g., by providing a video analysis for a trainee in the hospital that shows if the tasks have been correctly executed.
The analysis of complex activities in videos, however, is very challenging since activities vary in temporal duration between minutes and hours, involve interactions with several objects that change their appearance and shape, e.g., food during cooking, and are composed of many sub-activities, which can happen at the same time or in various orders.
While the majority of recent works in action recognition focuses on developing better feature encoding techniques for classifying sub-activities in short video clips of a few seconds, this project moves forward and aims to develop a higher level representation of complex activities to overcome the limitations of current approaches. This includes the handling of large time variations and the ability to recognize and locate complex activities in videos. To this end, we aim to develop a unified model that provides detailed information about the activities and sub-activities in terms of time and spatial location, as well as involved pose motion, objects and their transformations.
Another aspect of the project is to learn a representation from videos that is not tied to a specific source of videos or limited to a specific application. Instead we aim to learn a representation that is invariant to a perspective change, e.g., from a third-person perspective to an egocentric perspective, and can be applied to various modalities like videos or depth data without the need to collect massive training data for all modalities. In other words, we aim to learn the essence of activities.
Max ERC Funding
1 499 875 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym ASTROLAB
Project Cold Collisions and the Pathways Toward Life in Interstellar Space
Researcher (PI) Holger Kreckel
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Starting Grant (StG), PE9, ERC-2012-StG_20111012
Summary Modern telescopes like Herschel and ALMA open up a new window into molecular astrophysics to investigate a surprisingly rich chemistry that operates even at low densities and low temperatures. Observations with these instruments have the potential to unravel key questions of astrobiology, like the accumulation of water and pre-biotic organic molecules on (exo)planets from asteroids and comets. Hand-in-hand with the heightened observational activity comes a strong demand for a thorough understanding of the molecular formation mechanisms. The vast majority of interstellar molecules are formed in ion-neutral reactions that remain efficient even at low temperatures. Unfortunately, the unusual nature of these processes under terrestrial conditions makes their laboratory study extremely difficult.
To address these issues, I propose to build a versatile merged-beams setup for laboratory studies of ion-neutral collisions at the Cryogenic Storage Ring (CSR), the most ambitious of the next-generation storage devices under development worldwide. With this experimental setup, I will make use of a low-temperature and low-density environment that is ideal for simulating the conditions prevailing in interstellar space. The cryogenic environment, in combination with laser-generated ground-state atom beams, will allow me to perform precise energy-resolved rate coefficient measurements for reactions between cold molecular ions (e.g., H2+, H3+, HCO+, CH2+ or CH3+) and neutral atoms (H, D, C or O) in order to shed light on long-standing problems of astrochemistry and the formation of organic molecules in space.
With the wide range of accessible collision energies (corresponding to temperatures of 40-40000 K), I will be able to provide data that are crucial for the interpretation of molecular observations in a variety of objects, ranging from cold molecular clouds to warm layers in protoplanetary disks.
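The quoted 40-40000 K range can be translated into characteristic collision energies via E = k_B T. A quick sketch (the function name is mine; only the standard Boltzmann constant is used, no project code):

```python
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K (CODATA)

def collision_energy_ev(temperature_k: float) -> float:
    """Characteristic collision energy E = k_B * T, in eV."""
    return K_B_EV * temperature_k

for t in (40, 40_000):
    print(f"{t:>6} K  ->  {collision_energy_ev(t) * 1000:8.2f} meV")
```

This places the experiment between a few meV and a few eV, i.e. from deep-interstellar conditions up to energies typical of warm disk layers.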
Max ERC Funding
1 486 800 €
Duration
Start date: 2012-09-01, End date: 2017-11-30
Project acronym AV-SMP
Project Algorithmic Verification of String Manipulating Programs
Researcher (PI) Anthony LIN
Host Institution (HI) TECHNISCHE UNIVERSITAET KAISERSLAUTERN
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Strings are among the most fundamental and commonly used data types in virtually all modern programming languages, especially with the rapidly growing popularity of scripting languages (e.g. JavaScript and Python). Programs written in such languages tend to perform heavy string manipulations, which are complex to reason about and can easily lead to programming mistakes. In some cases, such mistakes can have serious consequences, e.g., in the case of client-side web applications, cross-site scripting (XSS) attacks that could lead to a security breach by a malicious user.
The central objective of the proposed project is to develop novel verification algorithms for analysing the correctness (esp. with respect to safety and termination properties) of programs with string variables, and to transform them into robust verification tools. To meet this key objective, we will make fundamental breakthroughs on both theoretical and tool implementation challenges. On the theoretical side, we address two important problems: (1) design expressive constraint languages over strings (in combination with other data types like integers) that permit decidability with good complexity, and (2) design generic semi-algorithms for verifying string programs that have strong theoretical performance guarantees. On the implementation side, we will address the challenging problem of designing novel implementation methods that can substantially speed up the basic string analysis procedures in practice. Finally, as a proof of concept, we will apply our technologies to two key application domains: (1) automatic detection of XSS vulnerabilities in web applications, and (2) automatic grading systems for a programming course.
The project will not only make fundamental theoretical contributions — potentially solving long-standing open problems in the area — but also yield powerful methods that can be used in various applications.
Max ERC Funding
1 496 687 €
Duration
Start date: 2017-11-01, End date: 2022-10-31