Project acronym ACUITY
Project Algorithms for coping with uncertainty and intractability
Researcher (PI) Nikhil Bansal
Host Institution (HI) TECHNISCHE UNIVERSITEIT EINDHOVEN
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary The two biggest challenges in solving practical optimization problems are computational intractability and the presence of uncertainty: most problems are either NP-hard or have incomplete input data, which makes exact computation impossible.
Recently, there has been huge progress in our understanding of intractability, based on spectacular algorithmic and lower-bound techniques. For several problems, especially those with only local constraints, we can design approximation algorithms that are provably the best possible.
However, typical optimization problems usually involve complex global constraints and are much less understood. The situation is even worse for coping with uncertainty: most algorithms are based on ad hoc techniques, and there is no deeper understanding of what makes various problems easy or hard.
This proposal describes several new directions, together with concrete intermediate goals, that will break important new ground in the theory of approximation and online algorithms. The particular directions we consider are: (i) extend the primal-dual method to systematically design online algorithms; (ii) build a structural theory of online problems based on work functions; (iii) develop new tools to exploit the power of strong convex relaxations; and (iv) design new algorithmic approaches based on non-constructive proof techniques.
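To make direction (i) concrete, here is a minimal, hedged sketch of the online primal-dual method on fractional set cover, a standard textbook instance rather than anything from this proposal. When an element arrives uncovered, the variables of the sets containing it grow multiplicatively, so each element forces only about O(log d) update rounds, where d is the maximum element frequency; all names and the toy instance below are illustrative.

```python
# Illustrative sketch of the online primal-dual method on fractional set cover
# (standard textbook scheme, not this proposal's results). Elements arrive one
# by one; x[s] is the fractional extent to which set s is bought.
def online_set_cover(sets, elements):
    """sets: dict name -> frozenset of elements; elements: arrival order."""
    x = {s: 0.0 for s in sets}                    # fractional primal solution
    d = max(sum(1 for mem in sets.values() if e in mem) for e in elements)
    for e in elements:                            # elements revealed online
        covering = [s for s, mem in sets.items() if e in mem]
        while sum(x[s] for s in covering) < 1.0:  # e not yet fractionally covered
            for s in covering:                    # multiplicative update step
                x[s] = x[s] * 2 + 1.0 / (len(covering) * d)
    return x

sets = {"A": frozenset({1, 2}), "B": frozenset({2, 3}), "C": frozenset({1, 3})}
print(online_set_cover(sets, elements=[1, 2, 3]))
```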
The proposed research is at the cutting edge of algorithm design, and builds upon the recent success of the PI in resolving several longstanding questions in these areas. Any progress is likely to be a significant contribution to theoretical computer science and combinatorial optimization.
Max ERC Funding
1 519 285 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym AI4REASON
Project Artificial Intelligence for Large-Scale Computer-Assisted Reasoning
Researcher (PI) Josef Urban
Host Institution (HI) CESKE VYSOKE UCENI TECHNICKE V PRAZE
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary The goal of the AI4REASON project is a breakthrough in what is considered a very hard problem in AI and automation of reasoning, namely the problem of automatically proving theorems in large and complex theories. Such complex formal theories arise in projects aimed at verification of today's advanced mathematics such as the Formal Proof of the Kepler Conjecture (Flyspeck), verification of software and hardware designs such as the seL4 operating system kernel, and verification of other advanced systems and technologies on which today's information society critically depends.
An explicitly programmed solution to this problem seems too complex to design and unlikely to succeed. However, we have recently demonstrated that the performance of existing approaches can be multiplied by data-driven AI methods that learn reasoning guidance from large proof corpora. The breakthrough will be achieved by developing such novel AI methods. First, we will devise suitable Automated Reasoning and Machine Learning methods that learn reasoning knowledge and steer the reasoning processes at various levels of granularity. Second, we will combine them into autonomous self-improving AI systems that interleave deduction and learning in positive feedback loops. Third, we will develop approaches that aggregate reasoning knowledge across many formal, semi-formal and informal corpora and deploy the methods as strong automation services for the formal proof community.
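As a hedged illustration of the simplest form that "learning reasoning guidance from proof corpora" can take, the sketch below ranks candidate premises for a new goal by nearest-neighbour similarity over symbol features. The corpus, symbols and lemma names are hypothetical toy data, not the project's systems.

```python
# Minimal premise-selection sketch: recommend lemmas for a new goal from the
# premises used by the most similar previously proved theorems.
from collections import Counter
from math import sqrt

# Toy corpus: each proved theorem is (its set of symbols, the premises used).
corpus = [
    ({"+", "0", "nat"}, {"add_zero", "zero_add"}),
    ({"+", "*", "nat"}, {"mul_comm", "add_comm"}),
    ({"*", "1", "nat"}, {"mul_one", "one_mul"}),
]

def similarity(a, b):
    """Cosine-style overlap between two symbol sets."""
    return len(a & b) / sqrt(len(a) * len(b))

def select_premises(goal_features, k=2):
    """Rank premises by similarity-weighted votes of the k nearest theorems."""
    ranked = sorted(corpus, key=lambda ex: similarity(goal_features, ex[0]),
                    reverse=True)
    scores = Counter()
    for features, premises in ranked[:k]:
        w = similarity(goal_features, features)
        for prem in premises:
            scores[prem] += w
    return [prem for prem, _ in scores.most_common()]

print(select_premises({"+", "nat", "0"}))  # e.g. ['add_zero', 'zero_add', ...]
```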
The expected outcome is our ability to prove automatically at least 50% more theorems in high-assurance projects such as Flyspeck and seL4, bringing a major breakthrough in formal reasoning and verification. As an AI effort, the project offers a unique path to large-scale semantic AI. The formal corpora concentrate centuries of deep human thinking in a computer-understandable form on which deductive and inductive AI can be combined and co-evolved, providing new insights into how humans do mathematics and science.
Max ERC Funding
1 499 500 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym ALGSTRONGCRYPTO
Project Algebraic Methods for Stronger Crypto
Researcher (PI) Ronald John Fitzgerald CRAMER
Host Institution (HI) STICHTING NEDERLANDSE WETENSCHAPPELIJK ONDERZOEK INSTITUTEN
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Our field is cryptology. Our overarching objective is to significantly advance the frontiers in the design and analysis of high-security cryptography for the next generation. In particular, we wish to enhance the efficiency, functionality, and, last but not least, fundamental understanding of cryptographic security against very powerful adversaries. Our approach is to develop completely novel methods by deepening, strengthening and broadening the algebraic foundations of the field.
Concretely, our lens builds on the arithmetic codex. This is a general, abstract cryptographic primitive whose basic theory we recently developed and whose asymptotic part, which relies on algebraic geometry, enjoys crucial applications in surprising foundational results on constant communication-rate two-party cryptography. A codex is a linear (error-correcting) code that, when its ambient vector space is endowed with just coordinate-wise multiplication, can be viewed as simulating, up to some degree, richer arithmetical structures such as finite fields (or products thereof) or, more generally, finite-dimensional algebras over finite fields. Besides this degree, the notion also captures the coordinate-localities for which the simulation holds and those for which it does not hold at all.
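For intuition, the textbook example of a code with this multiplicative behaviour is Reed-Solomon: the coordinate-wise product of two codewords is itself the encoding of the product of the underlying polynomials. The sketch below demonstrates this over a small prime field; all parameters are toy choices for illustration, not the codex constructions of this proposal.

```python
# Reed-Solomon as a "multiplicative" code over F_p: coordinate-wise products
# of codewords encode products of polynomials (toy parameters, illustration only).
p = 97  # small prime field F_p

def encode(coeffs, points):
    """Evaluate the polynomial with the given coefficients at each point."""
    return [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
            for x in points]

def interpolate(points, values, x):
    """Lagrange interpolation over F_p, evaluated at x."""
    total = 0
    for xi, yi in zip(points, values):
        num, den = 1, 1
        for xj in points:
            if xj != xi:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # Fermat inverse
    return total

points = list(range(1, 8))        # 7 distinct evaluation points
f, g = [3, 1], [5, 2]             # two degree-1 polynomials (degree < k = 2)
cf, cg = encode(f, points), encode(g, points)
prod = [a * b % p for a, b in zip(cf, cg)]   # coordinate-wise product codeword

# prod encodes f*g (degree <= 2), recoverable since 7 >= 2k - 1 = 3 points.
assert interpolate(points, prod, 0) == (f[0] * g[0]) % p   # (f*g)(0) = 15
print("coordinate-wise product decodes to (f*g)(0) =", interpolate(points, prod, 0))
```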
Our method is based on novel perspectives on codices which significantly widen their scope and strengthen their utility. In particular, we bring symmetries, computational and complexity-theoretic aspects, and connections with algebraic number theory, algebraic geometry, and algebraic combinatorics into play in novel ways. Our applications range from public-key cryptography to secure multi-party computation.
Our proposal is subdivided into three interconnected modules:
(1) Algebraic and Number-Theoretical Cryptanalysis
(2) Construction of Algebraic Crypto Primitives
(3) Advanced Theory of Arithmetic Codices
Max ERC Funding
2 447 439 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym APPROXNP
Project Approximation of NP-hard optimization problems
Researcher (PI) Johan Håstad
Host Institution (HI) KUNGLIGA TEKNISKA HOEGSKOLAN
Call Details Advanced Grant (AdG), PE6, ERC-2008-AdG
Summary The proposed project aims to create a center of excellence devoted to understanding the approximability of NP-hard optimization problems. In particular, for central problems like vertex cover, graph coloring, and various constraint satisfaction problems, we want to study upper and lower bounds on how well they can be approximated in polynomial time. Many existing strong results are based on what is known as the Unique Games Conjecture (UGC), and a significant part of the project will be devoted to studying this conjecture. We expect that a major step in this process will be to further develop the understanding of Boolean functions on the Boolean hypercube. We anticipate that the tools needed for this will come in the form of harmonic analysis, which in turn will rely on corresponding results in the analysis of functions over the real numbers.
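For background, the harmonic analysis referred to here rests on the classical Fourier-Walsh expansion of Boolean functions, stated below; this is standard material, not a result of the project.

```latex
% Classical background, not a result of the project: the Fourier-Walsh
% expansion of a Boolean function f : \{-1,1\}^n \to \mathbb{R}.
\[
  f(x) \;=\; \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S(x),
  \qquad
  \chi_S(x) \;=\; \prod_{i \in S} x_i,
  \qquad
  \hat{f}(S) \;=\; \mathbb{E}_{x \in \{-1,1\}^n}\!\big[f(x)\,\chi_S(x)\big].
\]
% Parseval's identity, \sum_{S} \hat{f}(S)^2 = \mathbb{E}[f(x)^2], underlies
% many hypercube-analytic arguments, including UGC-based hardness proofs.
```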
Max ERC Funding
2 376 000 €
Duration
Start date: 2009-01-01, End date: 2014-12-31
Project acronym AsthmaVir
Project The roles of innate lymphoid cells and rhinovirus in asthma exacerbations
Researcher (PI) Hergen Spits
Host Institution (HI) ACADEMISCH MEDISCH CENTRUM BIJ DE UNIVERSITEIT VAN AMSTERDAM
Call Details Advanced Grant (AdG), LS6, ERC-2013-ADG
Summary Asthma exacerbations represent a high unmet medical need, in particular in young children. Human Rhinoviruses (HRV) are the main triggers of these exacerbations. Until now, Th2 cells were considered the main initiating effector cell type in asthma in general and in asthma exacerbations in particular. However, exaggerated Th2 cell activities alone do not explain all aspects of asthma and exacerbations. Building on our recent discovery of type 2 human innate lymphoid cells (ILC2s), capable of promptly producing high amounts of IL-5, IL-9 and IL-13 upon activation, and on mouse data pointing to an essential role of these cells in asthma and asthma exacerbations, ILC2s may be the main initiating cells in asthma exacerbations in humans. We therefore hypothesize that HRV directly or indirectly stimulate ILC2s to produce cytokines driving the effector functions that lead to the end-organ effects characterizing this debilitating disease. Targeting ILC2s and HRV in parallel will provide a highly attractive therapeutic option for the treatment of asthma exacerbations. In-depth study of the mechanisms of ILC2 differentiation and function will lead to the design of effective drugs targeting these cells; thus the first two objectives of this project are: 1) to unravel the lineage relationship of ILC populations and to decipher the signal transduction pathways that regulate the function of ILCs; 2) to test the functions of lung-residing human ILCs and the effects of compounds that affect these functions in mice that harbour a human immune system and human lung epithelium, under homeostatic conditions and after infection with respiratory viruses. The third objective of this project is to develop reagents that target HRV; to this end, we will develop broadly reactive, highly neutralizing human monoclonal antibodies that can be used for prophylaxis and therapy in patients at high risk of developing severe asthma exacerbations.
Max ERC Funding
2 499 593 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym Born-Immune
Project Shaping of the Human Immune System by Primal Environmental Exposures In the Newborn Child
Researcher (PI) Klas Erik Petter Brodin
Host Institution (HI) KAROLINSKA INSTITUTET
Call Details Starting Grant (StG), LS6, ERC-2015-STG
Summary Immune systems are highly variable, but the sources of this variation are poorly understood. Genetic variation explains only a minor fraction of it, and we are unable to accurately predict the risk of immune-mediated disease or severe infection in any given individual. I recently found that immune cells and proteins in healthy twins vary because of non-heritable influences (infections, vaccines, microbiota, etc.), with only minor influences from heritable factors (Brodin et al., Cell 2015). When and how such environmental influences shape our immune system is now important to understand. Birth represents the most transformational change in environment during the life of any individual. I propose that environmental influences at birth, and during the first months of life, could be particularly influential by imprinting on the regulatory mechanisms forming in the developing immune system. Adaptive changes in immune cell frequencies and functional states induced by early-life exposures could determine both the immune competence of the newborn and, potentially, its long-term trajectory towards immunological health or disease. Here, I propose a study of 1000 newborn children, followed longitudinally during their first 1000 days of life. By monitoring immune profiles and recording many environmental influences, we hope to understand how early-life exposures can influence human immune system development. We have established a new assay based on Mass Cytometry and the necessary data analysis tools (Brodin et al., PNAS 2014) to simultaneously monitor the frequencies, phenotypes and functional states of more than 200 blood immune cell populations from only 100 microliters of blood. By monitoring environmental influences at regular follow-up visits, through questionnaires, serum measurements of infection, and gut microbiome sequencing, we aim to provide the most comprehensive analysis to date of immune system development in newborn children.
Max ERC Funding
1 422 339 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym CAAXPROCESSINGHUMDIS
Project CAAX Protein Processing in Human DIsease: From Cancer to Progeria
Researcher (PI) Martin Olof Bergö
Host Institution (HI) GOETEBORGS UNIVERSITET
Call Details Starting Grant (StG), LS6, ERC-2007-StG
Summary My objective is to understand the physiologic and medical importance of the posttranslational processing of CAAX proteins (e.g., K-RAS and prelamin A) and to define the suitability of the CAAX protein processing enzymes as therapeutic targets for the treatment of cancer and progeria. CAAX proteins undergo three posttranslational processing steps at a carboxyl-terminal CAAX motif. These processing steps, which are mediated by four different enzymes (FTase, GGTase-I, RCE1, and ICMT), increase the hydrophobicity of the carboxyl terminus of the protein and thereby facilitate interactions with membrane surfaces. Somatic mutations in K-RAS deregulate cell growth and are etiologically involved in the pathogenesis of many forms of cancer. A mutation in prelamin A causes Hutchinson-Gilford progeria syndrome—a pediatric progeroid syndrome associated with misshapen cell nuclei and a host of aging-like disease phenotypes. One strategy to render the mutant K-RAS and prelamin A less harmful is to interfere with their ability to bind to membrane surfaces (e.g., the plasma membrane and the nuclear envelope). This could be accomplished by inhibiting the enzymes that modify the CAAX motif. My Specific Aims are: (1) To define the suitability of the CAAX processing enzymes as therapeutic targets in the treatment of K-RAS-induced lung cancer and leukemia; and (2) To test the hypothesis that inactivation of FTase or ICMT will ameliorate disease phenotypes of progeria. I have developed genetic strategies to produce lung cancer or leukemia in mice by activating an oncogenic K-RAS and simultaneously inactivating different CAAX processing enzymes. I will also inactivate several CAAX processing enzymes in mice with progeria—both before the emergence of phenotypes and after the development of advanced disease phenotypes. These experiments should reveal whether the absence of the different CAAX processing enzymes affects the onset, progression, or regression of cancer and progeria.
Max ERC Funding
1 689 600 €
Duration
Start date: 2008-06-01, End date: 2013-05-31
Project acronym CAFES
Project Causal Analysis of Feedback Systems
Researcher (PI) Joris Marten Mooij
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Many questions in science, policy making and everyday life are of a causal nature: how would changing A influence B? Causal inference, a branch of statistics and machine learning, studies how cause-effect relationships can be discovered from data and how these can be used for making predictions in situations where a system has been perturbed by an external intervention. The ability to reliably make such causal predictions is of great value for practical applications in a variety of disciplines. Over the last two decades, remarkable progress has been made in the field. However, even though state-of-the-art causal inference algorithms work well on simulated data when all their assumptions are met, there is still a considerable gap between theory and practice. The goal of CAFES is to bridge that gap by developing theory and algorithms that will enable large-scale applications of causal inference in various challenging domains in science, industry and decision making.
The key challenge that will be addressed is how to deal with cyclic causal relationships ("feedback loops"). Feedback loops are very common in many domains (e.g., biology, economics and climatology), but have mostly been ignored so far in the field. Building on recently established connections between dynamical systems and causal models, CAFES will develop theory and algorithms for causal modeling, reasoning, discovery and prediction for cyclic causal systems. Extensions to stationary and non-stationary processes will be developed to advance the state of the art in causal analysis of time-series data. In order to optimally use available resources, computationally efficient and statistically robust algorithms for causal inference from observational and interventional data in the context of confounders and feedback will be developed. The work will be done with a strong focus on applications in molecular biology, one of the most promising areas for automated causal inference from data.
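To illustrate what causal prediction under feedback involves, here is a minimal, hypothetical sketch of a cyclic structural causal model with two variables in a feedback loop, solved by fixed-point iteration; the coefficients and the do() intervention are toy values, not the project's methods.

```python
# Cyclic SCM sketch: equilibrate two mutually dependent structural equations,
# then predict the effect of an intervention that cuts the feedback loop.
def solve(f_x, f_y, iters=200):
    """Iterate the structural equations to a fixed point (assumes stability)."""
    x = y = 0.0
    for _ in range(iters):
        x, y = f_x(y), f_y(x)
    return x, y

# Feedback system: X := 1 + 0.5*Y,  Y := 2 + 0.4*X  (loop gain 0.2 < 1, stable).
obs = solve(lambda y: 1 + 0.5 * y, lambda x: 2 + 0.4 * x)

# Intervention do(X = 3): X's equation is replaced by the constant 3, while
# Y's equation is untouched; the feedback path through X is thereby removed.
post = solve(lambda y: 3.0, lambda x: 2 + 0.4 * x)

print("observational equilibrium:", obs)   # (2.5, 3.0)
print("after do(X = 3):", post)            # (3.0, 3.2)
```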
Max ERC Funding
1 405 652 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym CC-MEM
Project Coordination and Composability: The Keys to Efficient Memory System Design
Researcher (PI) David BLACK-SCHAFFER
Host Institution (HI) UPPSALA UNIVERSITET
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary Computer systems today are power limited. As a result, efficiency gains can be translated into performance. Over the past decade we have been so effective at making computation more efficient that we are now at the point where we spend as much energy moving data (from memory to cache to processor) as we do computing the results. And this trend is only becoming worse as we demand more bandwidth for more powerful processors. To improve performance we need to revisit the way we design memory systems from an energy-first perspective, both at the hardware level and by coordinating data movement between hardware and software.
CC-MEM will address memory system efficiency by redesigning low-level hardware and high-level hardware/software integration for energy efficiency. The key novelty is in developing a framework for creating efficient memory systems. This framework will enable researchers and designers to compose solutions to different memory system problems (through a shared exchange of metadata) and coordinate them towards high-level system efficiency goals (through a shared policy framework). Central to this framework is a bilateral exchange of metadata and policy between hardware and software components. This novel communication will open new challenges and opportunities for fine-grained optimizations, system-level efficiency metrics, and more effective divisions of responsibility between hardware and software components.
CC-MEM will change how researchers and designers approach memory system design from today’s ad hoc development of local solutions to one wherein disparate components can be integrated (composed) and driven (coordinated) by system-level metrics. As a result, we will be able to more intelligently manage data, leading to dramatically lower memory system energy and increased performance, and open new possibilities for hardware and software optimizations.
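As a purely illustrative sketch of the composition-and-coordination idea (hypothetical component names and made-up energy numbers, not the CC-MEM design), the snippet below shows one component publishing metadata that another consumes, with a system-level metric available to drive coordination.

```python
# Toy illustration: memory-system components composed via shared metadata and
# evaluated against a system-level efficiency metric (all values hypothetical).
class Metadata(dict):
    """Shared metadata exchanged between components (e.g., reuse hints)."""

class Prefetcher:
    def act(self, meta, addr):
        meta["predicted_next"] = addr + 64   # publish a hint: next cache line
        return meta["predicted_next"]

class CachePolicy:
    def act(self, meta, addr):
        # Consume another component's metadata instead of a local heuristic.
        if meta.get("predicted_next") == addr:
            meta["bypass"] = False           # keep lines the prefetcher expects
        return meta.get("bypass", True)

def system_energy(meta):
    """System-level metric that a coordinator would optimize (made-up costs)."""
    return 2.5 if meta.get("bypass", True) else 1.0  # bypassing forces a refetch

meta = Metadata()
pf, cp = Prefetcher(), CachePolicy()
nxt = pf.act(meta, 0)
cp.act(meta, nxt)
print("energy estimate per access:", system_energy(meta))
```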
Max ERC Funding
1 610 000 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym CHOBOTIX
Project Chemical Processing by Swarm Robotics
Researcher (PI) Frantisek Stepanek
Host Institution (HI) VYSOKA SKOLA CHEMICKO-TECHNOLOGICKA V PRAZE
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary The aim of the project is to develop chemical processing systems based on the principle of swarm robotics. The inspiration for swarm robotics comes from the behaviour of collective organisms – such as bees or ants – that can perform complex tasks by the combined actions of a large number of relatively simple, identical agents. The main scientific challenge of the project will be the design and synthesis of chemical swarm robots (“chobots”), which we envisage as internally structured particulate entities in the 10-100 µm size range that can move in their environment, selectively exchange molecules with their surroundings in response to a local change in temperature or concentration, chemically process those molecules and either accumulate or release the product. Such chemically active autonomous entities can be viewed as very simple pre-biotic life forms, although without the ability to self-replicate or evolve. In the course of the project, the following topics will be explored in detail: (i) the synthesis of suitable shells for chemically active swarm robots, both soft (with a flexible membrane) and hard (porous solid shells); (ii) the mechanisms of molecular transport into and out of such shells and means of actively controlling it; (iii) chemical reaction kinetics in spatially complex compartmental structures within the shells; (iv) collective behaviour of chemical swarm robots and their response to external stimuli. The project will be carried out by a multi-disciplinary team of enthusiastic young researchers. The concepts and technologies developed in the course of the project, as well as the advancements in the fundamental understanding of the behaviour of “chemical robots” and their functional sub-systems, will open up new opportunities in diverse areas including next-generation distributed chemical processing, synthesis and delivery of personalised medicines, recovery of valuable chemicals from dilute resources, environmental clean-up, and others.
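As a hedged illustration of topic (iii), the following sketch models a single chobot shell as a compartment with first-order uptake, internal conversion and product release, integrated with a simple Euler scheme; all rate constants are toy values, not measured parameters.

```python
# Toy compartment model of a chobot shell: substrate uptake -> conversion
# inside the shell -> product release (illustrative rate constants only).
def step(state, dt, k_in=0.5, k_rxn=1.0, k_out=0.3):
    s_env, s_in, p_in, p_env = state
    uptake  = k_in * s_env        # substrate crossing the shell inwards
    rxn     = k_rxn * s_in        # conversion inside the shell
    release = k_out * p_in        # product released to the environment
    return (s_env - dt * uptake,
            s_in  + dt * (uptake - rxn),
            p_in  + dt * (rxn - release),
            p_env + dt * release)

state = (1.0, 0.0, 0.0, 0.0)      # all substrate initially outside the shell
for _ in range(2000):             # integrate to t = 20; mass is conserved
    state = step(state, dt=0.01)
print("final (s_env, s_in, p_in, p_env):", [round(v, 3) for v in state])
```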
Max ERC Funding
1 644 000 €
Duration
Start date: 2008-06-01, End date: 2013-05-31