Project acronym 1stProposal
Project An alternative development of analytic number theory and applications
Researcher (PI) Andrew Granville
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Country United Kingdom
Call Details Advanced Grant (AdG), PE1, ERC-2014-ADG
Summary The traditional (Riemann) approach to analytic number theory uses the zeros of zeta functions. This requires the associated multiplicative function, say f(n), to have special enough properties that the associated Dirichlet series may be analytically continued. In this proposal we continue to develop an approach which requires less of the multiplicative function, linking the original question with the mean value of f. Such techniques have been around for a long time but have generally been regarded as “ad hoc”. In this project we aim to show that one can develop a coherent approach to the whole subject, not only reproving all of the old results but also proving many new ones that appear inaccessible to traditional methods.
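For context (our gloss, not the proposal's wording): for a multiplicative function f with |f(n)| ≤ 1, the mean value in question is the average

```latex
\frac{1}{x} \sum_{n \le x} f(n),
```

and Halász's theorem describes when this average tends to 0 as x → ∞. The programme described here makes such mean-value estimates, rather than the zeros of L-functions, the central tool.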
Our first goal is to complete a monograph yielding a reworking of all the classical theory using these new methods and then to push forward in new directions. The most important is to extend these techniques to GL(n) L-functions, which we hope will now be feasible having found the correct framework in which to proceed. Since we rarely know how to analytically continue such L-functions this could be of great benefit to the subject.
We are developing the large sieve so that it can be used for individual moduli, and will determine a strong form of it. We are also developing a new method that gives asymptotics for mean values when they are not too small.
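For reference, the classical large sieve inequality that this strand extends states, in its standard form,

```latex
\sum_{q \le Q} \; \sum_{\substack{a = 1 \\ (a,q) = 1}}^{q}
\left| \sum_{n = M+1}^{M+N} a_n \, e\!\left(\tfrac{an}{q}\right) \right|^2
\;\le\; (N + Q^2) \sum_{n = M+1}^{M+N} |a_n|^2,
```

where e(t) = e^{2πi t}. “Individual moduli” refers to isolating the contribution of a single modulus q, rather than averaging over all q ≤ Q as in the inequality above.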
We wish to incorporate techniques of analytic number theory into our theory, for example recent advances on mean values of Dirichlet polynomials. The recent breakthroughs in sieve theory also suggest strong links that need further exploration.
Additive combinatorics yields important results in many areas. There are strong analogies between its results and those for multiplicative functions, especially in the theory of the large-value spectrum and its applications. We hope to develop these further.
Much of this is joint work with K. Soundararajan of Stanford University.
Max ERC Funding
2 011 742 €
Duration
Start date: 2015-08-01, End date: 2020-07-31
Project acronym 3D-E
Project 3D Engineered Environments for Regenerative Medicine
Researcher (PI) Ruth Elizabeth Cameron
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Country United Kingdom
Call Details Advanced Grant (AdG), PE8, ERC-2012-ADG_20120216
Summary "This proposal develops a unified, underpinning technology to create novel, complex and biomimetic 3D environments for the control of tissue growth. As director of the Cambridge Centre for Medical Materials, I have recently been approached by medical colleagues to help to solve important problems in the separate therapeutic areas of breast cancer, cardiac disease and blood disorders. In each case, the solution lies in complex 3D engineered environments for cell culture. These colleagues make it clear that existing 3D scaffolds fail to provide the required complex orientational and spatial anisotropy, and are limited in their ability to impart appropriate biochemical and mechanical cues.
I have a strong track record in this area. A particular success has been the use of a freeze-drying technology to make collagen-based porous implants for the cartilage-bone interface in the knee, which has now been commercialised. The novelty of this proposal lies in the broadening of the established scientific base of this technology to enable biomacromolecular structures with:
(A) controlled and complex pore orientation to mimic many normal multi-oriented tissue structures,
(B) compositional and positional control to match varying local biochemical environments,
(C) the attachment of novel peptides designed to control cell behaviour, and
(D) mechanical control at both a local and macroscopic level to provide mechanical cues for cells.
These will be complemented by the development of
(E) robust characterisation methodologies for the structures created.
These advances will then be employed in each of the medical areas above.
This approach is highly interdisciplinary. Existing working relationships with experts in each medical field will guarantee expertise and licensed facilities in the required biological disciplines. Funds for this proposal would therefore establish a rich hub of mutually beneficial research and opportunities for cross-disciplinary sharing of expertise."
Max ERC Funding
2 486 267 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym ALGAME
Project Algorithms, Games, Mechanisms, and the Price of Anarchy
Researcher (PI) Elias Koutsoupias
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary The objective of this proposal is to bring together a local team of young researchers who will work closely with international collaborators to advance the state of the art of Algorithmic Game Theory and open new avenues of research at the interface of Computer Science, Game Theory, and Economics. The proposal consists mainly of three intertwined research strands: algorithmic mechanism design, price of anarchy, and online algorithms.
Specifically, we will attempt to resolve some outstanding open problems in algorithmic mechanism design: characterizing the incentive-compatible mechanisms for important domains, such as the domain of combinatorial auctions, and resolving the approximation ratio of mechanisms for scheduling unrelated machines. More generally, we will study centralized and distributed algorithms whose inputs are controlled by selfish agents that are interested in the outcome of the computation. We will investigate new notions of mechanisms with strong truthfulness and limited susceptibility to externalities that can facilitate modular design of mechanisms for complex domains.
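As a concrete illustration of incentive compatibility (a textbook example, not code from the project): in a second-price (Vickrey) auction the winner pays the second-highest bid, so the price paid does not depend on the winner's own bid and reporting one's true value is a dominant strategy.

```python
# Second-price (Vickrey) auction: a classic incentive-compatible mechanism.
# Textbook sketch only; names and structure are ours, not the project's.

def vickrey_auction(bids):
    """bids: dict mapping bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)  # bidders by bid, high to low
    winner = ranked[0]
    price = bids[ranked[1]]  # winner pays the second-highest bid
    return winner, price

winner, price = vickrey_auction({"a": 10, "b": 7, "c": 3})
# winner == "a", price == 7: bidder a's payment is set by b's bid, not her own
```

Because the payment is determined entirely by the other bids, no bidder can gain by misreporting; the characterization questions above ask which mechanisms retain this property in richer domains such as combinatorial auctions.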
We will expand the current research on the price of anarchy to time-dependent games where the players can select not only how to act but also when to act. We also plan to resolve outstanding questions on the price of stability and to build a robust approach to these questions, similar to smooth analysis. For repeated games, we will investigate convergence of simple strategies (e.g., fictitious play), online fairness, and strategic considerations (e.g., metagames). More generally, our aim is to find a productive formulation of playing unknown games by drawing on the fields of online algorithms and machine learning.
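The price of anarchy mentioned above is the ratio between the cost of the worst equilibrium and the optimal cost. Pigou's two-link routing network gives a minimal worked example (our sketch, not the project's): one unit of traffic chooses between a link with latency x (the fraction using it) and a link with constant latency 1.

```python
# Price of anarchy in Pigou's example. Illustrative sketch under the
# standard nonatomic-routing model; not code from the project.

def total_cost(x):
    # x = fraction of traffic on the variable link; cost = x*x + (1 - x)*1
    return x * x + (1 - x)

# Nash equilibrium: the variable link's latency x never exceeds 1, so all
# traffic takes it and the total cost is 1.
nash = total_cost(1.0)

# Social optimum: minimise x^2 + (1 - x) over [0, 1]; calculus gives
# x = 1/2 and total cost 3/4 (the grid search below confirms this).
opt = min(total_cost(i / 1000) for i in range(1001))

price_of_anarchy = nash / opt  # 4/3, the worst case for affine latencies
```

The 4/3 bound for affine latencies is the classic Roughgarden–Tardos result; the time-dependent games proposed here ask how such bounds behave when players also choose *when* to act.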
Max ERC Funding
2 461 000 €
Duration
Start date: 2013-04-01, End date: 2019-03-31
Project acronym ARTIMATTER
Project "Lego-Style Materials, Structures and Devices Assembled on Demand from Isolated Atomic Planes"
Researcher (PI) Andre Geim
Host Institution (HI) THE UNIVERSITY OF MANCHESTER
Country United Kingdom
Call Details Advanced Grant (AdG), PE3, ERC-2012-ADG_20120216
Summary "Following the advent of graphene with its wide range of unique properties, several other one-atom-thick crystals have been isolated and their preliminary studies have been undertaken. They range from semiconducting monolayers of MoS2 and NbSe2, which, similar to graphene, exhibit the electric field effect and relatively high electronic quality, to wide-gap insulators such as boron-nitride monolayers that can serve as atomically-thin tunnel barriers.
This library of two-dimensional crystals opens up the possibility of constructing various 3D structures with on-demand properties, which do not exist in nature but can be assembled in Lego style by stacking individual atomic planes on top of each other in a desired sequence. This project will explore this new avenue.
We will design, fabricate and study multilayer materials ranging from basic heterostructures that consist of a few alternating layers of graphene and boron nitride and already exhibit a rich spectrum of new phenomena, as recently demonstrated by the applicant’s group, to complex artificial materials containing many layers of different 2D crystals and mimicking, for example, layered superconductors. In a similar manner, various electronic, optoelectronic, micromechanical and other devices will be developed and investigated. The applicant’s aim is to search for new materials with unique properties, novel devices with better characteristics and new physics that is likely to emerge along the way.
The proposed research offers many exciting opportunities and can lead to the development of a large unexplored field with impact exceeding even that of graphene research. This presents a unique, once-in-a-decade, opportunity to make a very significant breakthrough in condensed matter physics and materials science."
Max ERC Funding
2 200 000 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym ASAP
Project Adaptive Security and Privacy
Researcher (PI) Bashar Nuseibeh
Host Institution (HI) THE OPEN UNIVERSITY
Country United Kingdom
Call Details Advanced Grant (AdG), PE6, ERC-2011-ADG_20110209
Summary With the prevalence of mobile computing devices and the increasing availability of pervasive services, ubiquitous computing (Ubicomp) is a reality for many people. This reality is generating opportunities for people to interact socially in new and richer ways, and to work more effectively in a variety of new environments. More generally, Ubicomp infrastructures – controlled by software – will determine users’ access to critical services.
With these opportunities come higher risks of misuse by malicious agents. Therefore, the role and design of software for managing use and protecting against misuse is critical, and the engineering of software that is functionally effective while safeguarding user assets from harm is a key challenge. Indeed, the very nature of Ubicomp means that software must adapt to the changing needs of users and their environment, and, more critically, to the different threats to users’ security and privacy.
ASAP proposes to radically re-conceptualise software engineering for Ubicomp in ways that are cognisant of the changing functional needs of users, of the changing threats to user assets, and of the changing relationships between them. We propose to deliver adaptive software capabilities for supporting users in managing their privacy requirements, and adaptive software capabilities to deliver secure software that underpins those requirements. A key novelty of our approach is its holistic treatment of security and human behaviour. To achieve this, it draws upon contributions from requirements engineering, security & privacy engineering, and human-computer interaction. Our aim is to contribute to software engineering that empowers and protects Ubicomp users. Underpinning our approach will be the development of representations of security and privacy problem structures that capture user requirements, the context in which those requirements arise, and the adaptive software that aims to meet those requirements.
Max ERC Funding
2 499 041 €
Duration
Start date: 2012-10-01, End date: 2018-09-30
Project acronym BIMPC
Project Biologically-Inspired Massively-Parallel Computation
Researcher (PI) Stephen Byram Furber
Host Institution (HI) THE UNIVERSITY OF MANCHESTER
Country United Kingdom
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "We aim to establish a world-leading research capability in Europe for advancing novel models of asynchronous computation based upon principles inspired by brain function. This work will accelerate progress towards an understanding of how the potential of brain-inspired many-core architectures may be harnessed. The results will include new brain-inspired models of asynchronous computation and new brain-inspired approaches to fault-tolerance and reliability in complex computer systems.
Many-core processors are now established as the way forward for computing from embedded systems to supercomputers. An emerging problem with leading-edge silicon technology is a reduction in the yield and reliability of modern processors due to high variability in the manufacture of the components and interconnect as transistor geometries shrink towards atomic scales. We are faced with the longstanding problem of how to make use of a potentially large array of parallel processors, but with the new constraint that the individual elements of the system are inherently unreliable.
The human brain remains one of the great frontiers of science – how does this organ upon which we all depend so critically actually do its job? A great deal is known about the underlying technology – the neuron – and we can observe large-scale brain activity through techniques such as magnetic resonance imaging, but this knowledge barely starts to tell us how the brain works. Something is happening at the intermediate levels of processing that we have yet to begin to understand, but the essence of the brain's massively-parallel information processing capabilities and robustness to component failure lies in these intermediate levels.
These two issues draw us towards two high-level research questions:
• Can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computing?
• Can massively parallel computing resources accelerate our understanding of brain function?"
Max ERC Funding
2 399 761 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym CANBUILD
Project Building a Human Tumour Microenvironment
Researcher (PI) Frances Rosemary Balkwill
Host Institution (HI) QUEEN MARY UNIVERSITY OF LONDON
Country United Kingdom
Call Details Advanced Grant (AdG), LS4, ERC-2012-ADG_20120314
Summary Even at their earliest stages, human cancers are more than just cells with malignant potential. Cells and extracellular matrix components that normally support and protect the body are coerced into a tumour microenvironment that is central to disease progression. My hypothesis is that recent advances in tissue engineering, biomechanics and stem cell biology make it possible to engineer, for the first time, a complex 3D human tumour microenvironment in which individual cell lineages of malignant, haemopoietic and mesenchymal origin will communicate, evolve and grow in vitro. The ultimate aim is to build this cancerous tissue with autologous cells: there is an urgent need for models in which we can study the interaction of human immune cells with malignant cells from the same individual in an appropriate 3D biomechanical microenvironment.
To achieve the objectives of the CANBUILD project, I have assembled a multi-disciplinary team of collaborators with international standing in tumour microenvironment research, cancer treatment, tissue engineering, mechanobiology, stem cell research and 3D computer-assisted imaging.
The goal is to recreate the microenvironment of high-grade serous ovarian cancer metastases in the omentum. This is a major clinical problem; my lab has extensive knowledge of this microenvironment, and we have already established simple 3D models of these metastases.
The research plan involves:
Deconstruction of this specific tumour microenvironment
Construction of an artificial scaffold, optimising growth of cell lineages, assembly of the model
Comparison to fresh tissue
Investigating the role of individual cell lineages
Testing therapies that target the tumour microenvironment
My vision is that this project will revolutionise the practice of human malignant cell research, replacing misleading systems based on cancer cell monoculture on plastic surfaces and allowing us to better test new treatments that target the human tumour microenvironment.
Max ERC Funding
2 431 035 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym CARDIOEPIGEN
Project Epigenetics and microRNAs in Myocardial Function and Disease
Researcher (PI) Gianluigi Condorelli
Host Institution (HI) HUMANITAS MIRASOLE SPA
Country Italy
Call Details Advanced Grant (AdG), LS4, ERC-2011-ADG_20110310
Summary Heart failure (HF) is the ultimate outcome of many cardiovascular diseases. Re-expression of fetal genes in the adult heart contributes to development of HF. Two mechanisms involved in the control of gene expression are epigenetics and microRNAs (miRs). We propose a project on epigenetic and miR-mediated mechanisms leading to HF.
Epigenetics refers to heritable modification of DNA and histones that does not modify the genetic code. Depending on the type of modification and on the site affected, these chemical changes up- or down-regulate transcription of specific genes. Despite it being a major player in gene regulation, epigenetics has been only partly investigated in HF. miRs are regulatory RNAs that target mRNAs for inhibition. Dysregulation of the cardiac miR signature occurs in HF. miR expression may itself be under epigenetic control, constituting a miR-epigenetic regulatory network. To our knowledge, this possibility has not been studied yet.
Our specific hypothesis is that the profile of DNA/histone methylation and the cross-talk between epigenetic enzymes and miRs have fundamental roles in defining the characteristics of cells during cardiac development and that the dysregulation of these processes determines the deleterious nature of the stressed heart’s gene programme. We will test this first through a genome-wide study of DNA/histone methylation to generate maps of the main methylation modifications occurring in the genome of cardiac cells treated with a pro-hypertrophy regulator and of a HF model. We will then investigate the role of epigenetic enzymes deemed important in HF, through the generation and study of knockout mouse models. Finally, we will test the possible therapeutic potential of modulating epigenetic genes.
We hope to further understand the pathological mechanisms leading to HF and to generate data instrumental to the development of diagnostic and therapeutic strategies for this disease.
Max ERC Funding
2 500 000 €
Duration
Start date: 2012-10-01, End date: 2018-09-30
Project acronym CME
Project Concurrency Made Easy
Researcher (PI) Bertrand Philippe Meyer
Host Institution (HI) POLITECNICO DI MILANO
Country Italy
Call Details Advanced Grant (AdG), PE6, ERC-2011-ADG_20110209
Summary The “Concurrency Made Easy” project is an attempt to achieve a conceptual breakthrough on the most daunting challenge in information technology today: mastering concurrency. Concurrency, once a specialized technique for experts, is forcing itself onto the entire IT community because of a disruptive phenomenon: the “end of Moore’s law as we know it”. Increases in performance can no longer happen through raw hardware speed, but only through concurrency, as in multicore architectures. Concurrency is also critical for networking, cloud computing and the progress of the natural sciences. Software support for these advances lags, mired in concepts from the 1960s such as semaphores. Existing formal models are hard to apply in practice. Incremental progress is not sufficient; neither are techniques that place the burden on programmers, who cannot all be expected to become concurrency experts. The CME project attempts a major shift on the side of the supporting technology: languages, formal models, verification techniques. The core idea of the CME project is to make concurrency easy for programmers by building on established ideas of modern programming methodology (object technology, Design by Contract), shifting the concurrency difficulties to the internals of the model and implementation.
The project includes the following elements.
1. Sound conceptual model for concurrency. The starting point is the influential previous work of the PI: concepts of object-oriented design, particularly Design by Contract, and the SCOOP concurrency model.
2. Reference implementation, integrated into an IDE.
3. Performance analysis.
4. Theory and formal basis, including full semantics.
5. Proof techniques, compatible with proof techniques for the sequential part.
6. Complementary verification techniques such as concurrent testing.
7. Library of concurrency components and examples.
8. Publication, including a major textbook on concurrency.
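To illustrate the Design by Contract idea the project builds on, here is a minimal sketch in Python. This is not SCOOP or Eiffel; the `contract` decorator and its `require`/`ensure` parameters are hypothetical names chosen for this example, showing only how pre- and postconditions document and check a routine's obligations.

```python
# Minimal Design-by-Contract sketch (illustrative only; the `contract`
# decorator is a hypothetical helper, not part of SCOOP or any library).
import functools

def contract(require=None, ensure=None):
    """Wrap a function with optional pre- and postcondition checks."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Precondition: the caller's obligation.
            if require is not None and not require(*args, **kwargs):
                raise AssertionError(f"precondition violated in {fn.__name__}")
            result = fn(*args, **kwargs)
            # Postcondition: the routine's obligation.
            if ensure is not None and not ensure(result, *args, **kwargs):
                raise AssertionError(f"postcondition violated in {fn.__name__}")
            return result
        return wrapper
    return decorator

@contract(require=lambda n: n >= 0,
          ensure=lambda r, n: r * r <= n < (r + 1) ** 2)
def isqrt(n):
    """Integer square root; the contract states and checks its meaning."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r
```

For example, `isqrt(10)` returns 3, while `isqrt(-1)` fails its precondition and raises an error; in the SCOOP model such contracts additionally acquire a concurrency role (preconditions become wait conditions on separate objects).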
Max ERC Funding
2 482 957 €
Duration
Start date: 2012-04-01, End date: 2018-09-30
Project acronym COIMBRA
Project Combinatorial methods in noncommutative ring theory
Researcher (PI) Agata Smoktunowicz
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Country United Kingdom
Call Details Advanced Grant (AdG), PE1, ERC-2012-ADG_20120216
Summary As noted by T. Y. Lam in his book, A first course in noncommutative rings, noncommutative ring theory is a fertile meeting ground for group theory (group rings), representation theory (modules), functional analysis (operator algebras), Lie theory (enveloping algebras), algebraic geometry (finitely generated algebras, differential operators), noncommutative algebraic geometry (graded domains), arithmetic (orders, Brauer groups), universal algebra (cohomology of rings, projective modules) and quantum physics (quantum matrices). As such, noncommutative ring theory is an area which has the potential to produce developments in many areas and in an efficient manner. The main aim of the project is to develop methods which could be applicable not only in ring theory but also in other areas, and then apply them to solve several important open questions in mathematics. The Principal Investigator, along with two PhD students and two postdoctoral researchers, proposes to: study basic open questions on infinite-dimensional associative noncommutative algebras; pool their expertise so as to tackle problems from a number of related areas of mathematics using noncommutative ring theory; and develop new approaches to existing problems that will benefit future researchers. A part of our methodology would be to first improve (in some cases) Bergman's Diamond Lemma, and then apply it to several open problems. The Diamond Lemma gives bases for the algebras defined by given sets of relations. In general, it is very difficult to determine if the algebra given by a concrete set of relations is non-trivial or infinite dimensional. Our approach is to introduce smaller rings, which we will call platinum rings. The next step would then be to apply the Diamond Lemma to the platinum ring instead of the original ring. Such results would have many applications in group theory, noncommutative projective geometry, nonassociative algebras and no doubt other areas as well.
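As a standard textbook illustration of how the Diamond Lemma produces a basis (this example is not taken from the proposal), consider the Weyl algebra

```latex
A_1 \;=\; k\langle x, y\rangle \,/\, (yx - xy - 1),
```

and read the relation as the rewriting rule $yx \to xy + 1$, ordering monomials with $x$ before $y$. The leading word $yx$ has no overlap with itself, so there are no ambiguities to resolve, and the Diamond Lemma gives the basis of irreducible monomials

```latex
\{\, x^i y^j : i, j \ge 0 \,\}.
```

For instance, $y^2 x = y(xy+1) = (xy+1)y + y = xy^2 + 2y$, so every word reduces to this normal form; in particular $A_1$ is infinite dimensional, which is exactly the kind of conclusion that is hard to reach for a general set of relations.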
Max ERC Funding
1 406 551 €
Duration
Start date: 2013-06-01, End date: 2018-05-31