Project acronym ANAMULTISCALE
Project Analysis of Multiscale Systems Driven by Functionals
Researcher (PI) Alexander Mielke
Host Institution (HI) FORSCHUNGSVERBUND BERLIN EV
Call Details Advanced Grant (AdG), PE1, ERC-2010-AdG_20100224
Summary Many complex phenomena in the sciences are described by nonlinear partial differential equations, the solutions of which exhibit oscillations and concentration effects on multiple temporal or spatial scales. Our aim is to use methods from applied analysis to contribute to the understanding of the interplay of effects on different scales. The central question is to determine those quantities on the microscale which are needed for the correct description of the macroscopic evolution.
We aim to develop a mathematical framework for analyzing and modeling coupled systems with multiple scales. This will include Hamiltonian dynamics as well as different types of dissipation, such as gradient flows or rate-independent dynamics. The choice of models will be guided by specific applications in material modeling (e.g., thermoplasticity, pattern formation, porous media) and optoelectronics (pulse interaction, Maxwell-Bloch systems, semiconductors, quantum mechanics). The research will address mathematically fundamental issues such as existence and stability of solutions, but will mainly be devoted to the modeling of multiscale phenomena in evolution systems. We will focus on systems with geometric structures, where the dynamics is driven by functionals. Thus, we can go far beyond the classical theory of homogenization and singular perturbations. The novel features of our approach are
- the combination of different dynamical effects in one framework,
- the use of geometric and metric structures for coupled partial differential equations,
- the exploitation of Gamma-convergence for evolution systems driven by functionals (a schematic formulation is sketched below).
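Schematically (standard notation, not quoted from the proposal), an evolution driven by a functional at a small scale \varepsilon takes the gradient-flow form
\[
G_\varepsilon(u)\,\dot u \;=\; -\,\mathrm{D}\mathcal{E}_\varepsilon(u),
\]
where \mathcal{E}_\varepsilon is the driving energy functional and G_\varepsilon encodes the dissipative (metric) structure. Passing to the macroscale then means identifying an effective pair (\mathcal{E}_0, G_0) whose flow is the limit of the \varepsilon-flows; Gamma-convergence \mathcal{E}_\varepsilon \to \mathcal{E}_0, suitably strengthened to handle evolution, is the tool referred to in the last item above.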
Max ERC Funding
1 390 000 €
Duration
Start date: 2011-04-01, End date: 2017-03-31
Project acronym ANOPTSETCON
Project Analysis of optimal sets and optimal constants: old questions and new results
Researcher (PI) Aldo Pratelli
Host Institution (HI) FRIEDRICH-ALEXANDER-UNIVERSITAET ERLANGEN NUERNBERG
Call Details Starting Grant (StG), PE1, ERC-2010-StG_20091028
Summary The analysis of geometric and functional inequalities naturally leads one to consider the extremal cases, thus looking for optimal sets, or optimal functions, or optimal constants. The most classical examples are the (different versions of the) isoperimetric inequality and the Sobolev-like inequalities. Much is known about equality cases and best constants, but there are still many questions which seem quite natural and yet have no answer. For instance, even in the two-dimensional case, the answer to a question of Brezis is not known: which set, among those with a given volume, has the biggest Sobolev-Poincaré constant for p=1? This is a very natural problem, and it appears reasonable that the optimal set should be the ball, but this has never been proved. The interest of problems like this lies not only in the extreme simplicity of the questions and in their classical flavour, but also in the new ideas and techniques which are needed to provide the answers.
The main techniques that we aim to use are fine arguments of symmetrization, geometric constructions, and tools from mass transportation (which is well known to be deeply connected with functional inequalities). These are the basic tools that we have already used in recent years to obtain many results in a specific direction, namely the search for sharp quantitative inequalities. Our first result, obtained together with Fusco and Maggi, can be described as follows. It is well known that the set which minimizes the perimeter for a given volume is the ball. But is it true that a set which almost minimizes the perimeter must be close to a ball? The question was posed in the 1920s, and many partial results appeared over the years. In our paper (Ann. of Math., 2007) we proved the sharp result. Many other results of this kind have been obtained in the last two years.
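For orientation, the sharp quantitative isoperimetric inequality proved there can be stated as follows (a standard formulation; the notation is not taken from the proposal): there is a dimensional constant c(n) > 0 such that, for every Borel set E in R^n with the same volume as a ball B,
\[
P(E) \;\ge\; P(B)\,\bigl(1 + c(n)\,\alpha(E)^2\bigr),
\qquad
\alpha(E) \;=\; \min_{x \in \mathbb{R}^n} \frac{|E \,\triangle\, (B + x)|}{|B|},
\]
where P denotes the perimeter and \alpha(E) is the Fraenkel asymmetry; the exponent 2 is sharp, which is what makes the inequality quantitative.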
Max ERC Funding
540 000 €
Duration
Start date: 2010-08-01, End date: 2015-07-31
Project acronym ANTHOS
Project Analytic Number Theory: Higher Order Structures
Researcher (PI) Valentin Blomer
Host Institution (HI) GEORG-AUGUST-UNIVERSITAT GOTTINGEN STIFTUNG OFFENTLICHEN RECHTS
Call Details Starting Grant (StG), PE1, ERC-2010-StG_20091028
Summary This is a proposal for research at the interface of analytic number theory, automorphic forms and algebraic geometry. Motivated by fundamental conjectures in number theory, classical problems will be investigated in higher order situations: general number fields, automorphic forms on higher rank groups, the arithmetic of algebraic varieties of higher degree. In particular, I want to focus on
- computation of moments of L-functions of degree 3 and higher, with applications to subconvexity (defined below) and/or non-vanishing, as well as subconvexity for multiple L-functions;
- bounds for sup-norms of cusp forms on various spaces and equidistribution of Hecke correspondences;
- automorphic forms on higher rank groups and general number fields, in particular new bounds towards the Ramanujan conjecture;
- a proof of Manin's conjecture for a certain class of singular algebraic varieties.
The underlying methods are closely related; for example, rational points on algebraic varieties
will be counted by a multiple L-series technique.
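As background (standard definitions, not quoted from the proposal): for an L-function L(s, \pi) with analytic conductor C(\pi), the functional equation and the Phragmén-Lindelöf principle yield the convexity bound
\[
L(\tfrac{1}{2}, \pi) \;\ll_\varepsilon\; C(\pi)^{1/4 + \varepsilon},
\]
and a subconvexity result replaces the exponent 1/4 by 1/4 - \delta for some fixed \delta > 0. Computing moments such as \sum_{f} |L(\tfrac{1}{2}, f)|^k over a family is one of the main routes to such bounds and to non-vanishing statements.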
Max ERC Funding
1 004 000 €
Duration
Start date: 2010-10-01, End date: 2015-09-30
Project acronym CAC
Project Cryptography and Complexity
Researcher (PI) Yuval Ishai
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary Modern cryptography has deeply rooted connections with computational complexity theory and other areas of computer science. This proposal suggests exploring several new connections between questions in cryptography and questions from other domains, including computational complexity, coding theory, and even the natural sciences. The project is expected to broaden the impact of ideas from cryptography on other domains, and conversely to benefit cryptography by applying tools from other domains towards better solutions for central problems in cryptography.
Max ERC Funding
1 459 703 €
Duration
Start date: 2010-12-01, End date: 2015-11-30
Project acronym CAP
Project Computers Arguing with People
Researcher (PI) Sarit Kraus
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Advanced Grant (AdG), PE6, ERC-2010-AdG_20100224
Summary An important form of negotiation is argumentation. This is the ability to argue and to persuade the other party to accept a desired agreement, to acquire or give information, to coordinate goals and actions, and to find and verify evidence. This is a key capability in negotiating with humans.
While automated negotiation between software agents can often be reduced to exchanging offers and counteroffers, humans require persuasion. This challenges the design of agents arguing with people, with the objective that the outcome of the negotiation will meet the preferences of the arguer agent.
CAP’s objective is to enable automated agents to argue and persuade humans.
To achieve this, we intend to develop the following key components:
1) The extension of current game theory models of persuasion and bargaining to more realistic settings,
2) Algorithms and heuristics for generation and evaluation of arguments during negotiation with people (a toy sketch of this component follows the summary),
3) Algorithms and heuristics for managing inconsistent views of the negotiation environment, and decision procedures for revelation, signalling, and requesting information,
4) The revision and update of the agent’s mental state and incorporation of social context,
5) Identifying strategies for expressing emotions in negotiations,
6) Technology for general opponent modelling from sparse and noisy data.
To demonstrate the developed methods, we will implement two training systems: one for people to improve their interviewing capabilities, and one for training negotiators in inter-cultural negotiations.
CAP will revolutionise the state of the art of automated systems negotiating with people. It will also create breakthroughs in the research of multi-agent systems in general, and will change paradigms by providing new directions for the way computers interact with people.
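To make component 2 concrete, here is a toy, self-contained sketch (all names and numbers are hypothetical, not CAP's actual algorithms): the agent keeps a Bayesian belief over the human opponent's type, picks the argument with the highest expected persuasion gain, and updates the belief after each observed response.

OPPONENT_TYPES = ("cooperative", "competitive")

# Hypothetical effect of each argument on each opponent type.
PERSUASION_GAIN = {
    "appeal_to_fairness":  {"cooperative": 0.6, "competitive": 0.2},
    "threat_to_walk_away": {"cooperative": 0.1, "competitive": 0.6},
    "offer_side_payment":  {"cooperative": 0.3, "competitive": 0.4},
}

def best_argument(belief):
    # Pick the argument maximizing expected gain under the current belief.
    def expected_gain(arg):
        return sum(belief[t] * PERSUASION_GAIN[arg][t] for t in OPPONENT_TYPES)
    return max(PERSUASION_GAIN, key=expected_gain)

def update_belief(belief, likelihood):
    # Bayes rule: posterior(type) is proportional to prior(type) * P(response | type).
    posterior = {t: belief[t] * likelihood[t] for t in OPPONENT_TYPES}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

belief = {"cooperative": 0.5, "competitive": 0.5}   # uniform prior
print(best_argument(belief))                        # -> appeal_to_fairness
# A hostile response is assumed twice as likely from a competitive opponent:
belief = update_belief(belief, {"cooperative": 0.2, "competitive": 0.4})
print(best_argument(belief))                        # -> threat_to_walk_away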
Max ERC Funding
2 334 057 €
Duration
Start date: 2011-07-01, End date: 2016-06-30
Project acronym COMPCAMERAANALYZ
Project Understanding Designing and Analyzing Computational Cameras
Researcher (PI) Anat Levin
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary Computational cameras go beyond 2D images and allow the extraction of more dimensions from the visual world, such as depth, multiple viewpoints and multiple illumination conditions. They also allow us to overcome some of the traditional photography challenges such as defocus blur, motion blur, noise and limited resolution. The increasing variety of computational cameras raises the need for a meaningful comparison across camera types. We would like to understand which cameras are better for specific tasks, which aspects of a camera make it better than others, and what is the best performance we can hope to achieve.
Our 2008 paper introduced a general framework to address the design and analysis of computational cameras. A camera is modeled as a linear projection in ray space. Decoding the camera data then deals with inverting the linear projection. Since the number of sensor measurements is usually much smaller than the number of rays, the inversion must be treated as a Bayesian inference problem accounting for prior knowledge on the world.
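As a minimal numerical sketch of this framework (illustrative sizes and prior; not code from the 2008 paper): the camera is the linear projection y = T x + n in ray space, and decoding is Bayesian inversion of T under an assumed Gaussian smoothness prior on the ray-space signal x.

import numpy as np

rng = np.random.default_rng(0)
n_rays, n_meas, sigma = 200, 50, 0.01    # far fewer measurements than rays

# Assumed prior: nearby rays are correlated (exponential covariance).
idx = np.arange(n_rays)
Sigma = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)

T = rng.standard_normal((n_meas, n_rays))        # hypothetical camera projection
L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(n_rays))
x_true = L @ rng.standard_normal(n_rays)         # a world sample drawn from the prior
y = T @ x_true + sigma * rng.standard_normal(n_meas)

# MAP (= posterior mean) estimate under the Gaussian prior:
#   x_hat = Sigma T^T (T Sigma T^T + sigma^2 I)^{-1} y
x_hat = Sigma @ T.T @ np.linalg.solve(T @ Sigma @ T.T + sigma**2 * np.eye(n_meas), y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))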
Despite the significant progress made in recent years, the space of computational cameras is still far from being understood.
Computational camera analysis raises the following research challenges: 1) What is a good way to model prior knowledge on ray space? 2) Seeking efficient inference algorithms and robust ways to decode the world from the camera measurements. 3) Evaluating the expected reconstruction accuracy of a given camera. 4) Using the expected reconstruction performance for evaluating and comparing camera types. 5) What is the best camera? Can we derive upper bounds on the optimal performance?
We propose research on all aspects of computational camera design and analysis. We propose new prior models which will significantly simplify the inference and evaluation tasks. We also propose new ways to bound and evaluate computational cameras with existing priors.
Max ERC Funding
756 845 €
Duration
Start date: 2010-12-01, End date: 2015-11-30
Project acronym CPDENL
Project Control of partial differential equations and nonlinearity
Researcher (PI) Jean-Michel Coron
Host Institution (HI) UNIVERSITE PIERRE ET MARIE CURIE - PARIS 6
Call Details Advanced Grant (AdG), PE1, ERC-2010-AdG_20100224
Summary The aim of this 5.5-year project is to create around the PI a research group on the control of systems modeled by partial differential equations at the Laboratory Jacques-Louis Lions of the UPMC, and to develop with this group an intensive research activity focused on nonlinear phenomena.
With the ERC grant, the PI plans to hire post-doc fellows and PhD students, to offer 1-to-3-month positions to established researchers, and to run a regular seminar and workshops.
A lot is known about finite-dimensional control systems and about linear control systems modeled by partial differential equations. Much less is known for nonlinear control systems modeled by partial differential equations. In particular, in many important cases, one does not know how to use the classical iterated Lie brackets which are so useful for dealing with nonlinear control systems in finite dimension.
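For context (a standard finite-dimensional statement, not quoted from the proposal): for a driftless control-affine system
\[
\dot x = \sum_{i=1}^m u_i\, f_i(x), \qquad x \in \mathbb{R}^n,
\]
the Chow-Rashevskii theorem gives controllability whenever the iterated Lie brackets
\[
[f_i, f_j] := \mathrm{D}f_j\, f_i - \mathrm{D}f_i\, f_j, \quad [f_i, [f_j, f_k]], \;\dots
\]
span \mathbb{R}^n at every point. It is this bracket machinery that has no general counterpart for systems modeled by partial differential equations, which is one of the gaps the project aims to close.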
In this project, the PI plans to develop, with the research group, methods to deal with the problems of controllability and of stabilization for nonlinear systems modeled by partial differential equations, in the case where the nonlinearity plays a crucial role. This is for example the case where the linearized control system around the equilibrium of interest is not controllable or not stabilizable. This is also the case when the nonlinearity is too big at infinity and one looks for global results. This is also the case if the nonlinearity contains too many derivatives. The PI has already introduced some methods to deal with these cases, but a lot remains to be done. Indeed, many natural, important and challenging problems are still open. Precise examples, often coming from physics, are given in this proposal.
Max ERC Funding
1 403 100 €
Duration
Start date: 2011-05-01, End date: 2016-09-30
Project acronym CRYSP
Project CRYSP: A Novel Framework for Collaboratively Building Cryptographically Secure Programs and their Proofs
Researcher (PI) Karthikeyan Bhargavan
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary The field of software security analysis stands at a critical juncture. Applications have become too large for security experts to examine by hand, automated verification tools do not scale, and the risks of deploying insecure software are too great to tolerate anything less than mathematical proof. A radical shift of strategy is needed if programming and analysis techniques are to keep up in a networked world where increasing amounts of governmental and individual information are generated, manipulated, and accessed through web-based software applications.
The basic tenet of this proposal is that the main roadblock to the security verification of a large program is not its size, but rather the lack of precise security specifications for the underlying libraries and security-critical application code. Since large-scale software is often a collaborative effort, no single programmer knows all the design goals. Hence, this proposal advocates a collaborative specification and verification framework that helps teams of programmers write detailed security specifications incrementally and then verify that they are satisfied by the source program.
The main scientific challenge is to develop new program verification techniques that can be applied collaboratively, incrementally, and modularly to application and library code written in mainstream programming languages. The validation of this approach will be through substantial case studies. Our aim is to produce the first verified open source cryptographic protocol library and the first web applications with formal proofs of security.
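To make the idea of incremental, per-function security specifications concrete, here is a toy sketch (hypothetical API; runtime-checked contracts are used here as a stand-in for the static, machine-checked proofs the project targets):

from functools import wraps

def spec(pre, post):
    # Attach a precondition and a postcondition to a function.
    def decorate(f):
        @wraps(f)
        def checked(*args, **kwargs):
            assert pre(*args, **kwargs), f"precondition of {f.__name__} violated"
            result = f(*args, **kwargs)
            assert post(result), f"postcondition of {f.__name__} violated"
            return result
        return checked
    return decorate

# Incremental specification of a hypothetical crypto library function:
# the key must be 32 bytes, and the output must be a byte string.
@spec(pre=lambda key, msg: len(key) == 32,
      post=lambda ct: isinstance(ct, bytes))
def encrypt(key: bytes, msg: bytes) -> bytes:
    # Placeholder cipher (NOT secure), only so the sketch runs end to end.
    return bytes(b ^ k for b, k in zip(msg, key * (len(msg) // 32 + 1)))

print(encrypt(b"\x01" * 32, b"hello"))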
The proposed project is bold and ambitious, but it is certainly feasible, and has the potential to change how software security is analyzed for years to come.
Max ERC Funding
1 406 726 €
Duration
Start date: 2010-11-01, End date: 2015-10-31
Project acronym CSP-COMPLEXITY
Project Constraint Satisfaction Problems: Algorithms and Complexity
Researcher (PI) Manuel Bodirsky
Host Institution (HI) TECHNISCHE UNIVERSITAET DRESDEN
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary The complexity of Constraint Satisfaction Problems (CSPs) has become a major common research focus of graph theory, artificial intelligence, and finite model theory. A recently discovered connection between the complexity of CSPs on finite domains and central problems in universal algebra has led to additional activity in the area.
The goal of this project is to extend the powerful techniques for constraint satisfaction to CSPs with infinite domains. The generalization of CSPs to infinite domains dramatically enhances the range of computational problems that can be analyzed with tools from constraint satisfaction complexity. Many problems from areas that have so far seen no interaction with constraint satisfaction complexity theory can be formulated using infinite domains (and not with finite domains), e.g. in phylogenetic reconstruction, temporal and spatial reasoning, computer algebra, and operations research. It turns out that the search for systematic complexity classifications in infinite-domain constraint satisfaction often leads to fundamental algorithmic results.
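To make the infinite-domain setting concrete, here is a minimal illustrative solver (not project code) for one of the simplest temporal-reasoning CSPs: variables range over the rationals and constraints have the form x < y or x <= y (an equality x = y can be encoded as x <= y together with y <= x). Such an instance is satisfiable iff no strict constraint lies on a cycle of order constraints, which the sketch checks via reachability.

def satisfiable(constraints):
    # constraints: list of (x, rel, y) with rel in {"<", "<="}.
    nodes = {v for x, _, y in constraints for v in (x, y)}
    reach = {v: {v} for v in nodes}               # reach[x]: nodes reachable from x
    edges = {(x, y) for x, _, y in constraints}   # an x <= y (or x < y) edge
    changed = True
    while changed:                                # naive transitive closure
        changed = False
        for x, y in edges:
            new = reach[y] - reach[x]
            if new:
                reach[x] |= new
                changed = True
    # Unsatisfiable iff some strict constraint x < y lies on a cycle,
    # i.e. y can reach x back through the order constraints.
    return not any(rel == "<" and x in reach[y] for x, rel, y in constraints)

print(satisfiable([("a", "<", "b"), ("b", "<=", "c")]))                    # True
print(satisfiable([("a", "<", "b"), ("b", "<=", "c"), ("c", "<=", "a")]))  # False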
The generalization of constraint satisfaction to infinite domains poses several mathematical challenges: to make the universal-algebraic approach work for infinite-domain constraint satisfaction we need fundamental concepts from model theory. Luckily, the new mathematical challenges come together with additional strong tools, such as Ramsey theory or results from model theory. The most important challenges are of an algorithmic nature: finding efficient algorithms for significant constraint languages, but also finding natural classes of problems that can be solved by a given algorithm.
Max ERC Funding
830 316 €
Duration
Start date: 2011-01-01, End date: 2015-12-31
Project acronym DAL
Project DAL: Defying Amdahl's Law
Researcher (PI) Andre Seznec
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Advanced Grant (AdG), PE6, ERC-2010-AdG_20100224
Summary Multicore processors have now become mainstream for both general-purpose and embedded computing. Instead of working on improving the architecture of the next-generation multicore, the DAL project deliberately anticipates the next few generations of multicores.
While multicores featuring thousands of cores might become feasible around 2020, there are strong indications that the sequential programming style will continue to be dominant. Even future mainstream parallel applications will exhibit large sequential sections. Amdahl's law indicates that high performance on these sequential sections is needed to enable overall high performance on the whole application. On many (most) applications, the effective performance of future computer systems using a 1000-core processor chip will significantly depend on their performance on both sequential code sections and single threads.
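For reference, Amdahl's law in its standard form (the worked figure below is illustrative, not from the proposal): if a fraction p of the execution is parallelizable, the speedup on N cores is
\[
S(N) \;=\; \frac{1}{(1-p) + p/N},
\qquad
\lim_{N \to \infty} S(N) \;=\; \frac{1}{1-p}.
\]
Even with p = 0.99 on N = 1000 cores, S(1000) = 1/(0.01 + 0.00099) \approx 91: the 1% sequential fraction caps the achievable speedup at 100, which is why sequential-section performance remains critical.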
We envision that, around 2020, processor chips will feature a few complex cores and many (maybe thousands of) simpler, more silicon- and power-efficient cores.
In the DAL research project, we will explore the microarchitecture techniques that will be needed to enable high performance on such heterogeneous processor chips. Very high performance will be required both on sequential sections (legacy sequential codes, sequential sections of parallel applications) and on critical threads of parallel applications (e.g. the main thread controlling the application). Our research will focus on enhancing single-process performance. On the microarchitecture side, we will explore both a radically new approach, the sequential accelerator, and more conventional processor architectures. We will also study how to exploit heterogeneous multicore architectures to enhance sequential thread performance.
Max ERC Funding
2 398 542 €
Duration
Start date: 2011-04-01, End date: 2016-03-31