Project acronym ACrossWire
Project A Cross-Correlated Approach to Engineering Nitride Nanowires
Researcher (PI) Hannah Jane JOYCE
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Starting Grant (StG), PE7, ERC-2016-STG
Summary Nanowires based on group III–nitride semiconductors exhibit outstanding potential for emerging applications in energy-efficient lighting, optoelectronics and solar energy harvesting. Nitride nanowires, tailored at the nanoscale, should overcome many of the challenges facing conventional planar nitride materials, and also add extraordinary new functionality to these materials. However, progress towards III–nitride nanowire devices has been hampered by the challenges in quantifying nanowire electrical properties using conventional contact-based measurements. Without reliable electrical transport data, it is extremely difficult to optimise nanowire growth and device design. This project aims to overcome this problem through an unconventional approach: advanced contact-free electrical measurements. Contact-free measurements, growth studies, and device studies will be cross-correlated to provide unprecedented insight into the growth mechanisms that govern nanowire electronic properties and ultimately dictate device performance. A key contact-free technique at the heart of this proposal is ultrafast terahertz conductivity spectroscopy: an advanced technique ideal for probing nanowire electrical properties. We will develop new methods to enable the full suite of contact-free (including terahertz, photoluminescence and cathodoluminescence measurements) and contact-based measurements to be performed with high spatial resolution on the same nanowires. This will provide accurate, comprehensive and cross-correlated feedback to guide growth studies and expedite the targeted development of nanowires with specified functionality. We will apply this powerful approach to tailor nanowires as photoelectrodes for solar photoelectrochemical water splitting. This is an application for which nitride nanowires have outstanding, yet unfulfilled, potential. 
This project will thus harness the true potential of nitride nanowires and bring them to the forefront of 21st century technology.
Max ERC Funding
1 499 195 €
Duration
Start date: 2017-04-01, End date: 2022-03-31
Project acronym ALEXANDRIA
Project Large-Scale Formal Proof for the Working Mathematician
Researcher (PI) Lawrence PAULSON
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary Mathematical proofs have always been prone to error. Today, proofs can be hundreds of pages long and combine results from many specialisms, making them almost impossible to check. One solution is to deploy modern verification technology. Interactive theorem provers have demonstrated their potential as vehicles for formalising mathematics through achievements such as the verification of the Kepler Conjecture. Proofs done using such tools reach a high standard of correctness.
However, existing theorem provers are unsuitable for mathematics. Their formal proofs are unreadable. They struggle to do simple tasks, such as evaluating limits. They lack much basic mathematics, and the material they do have is difficult to locate and apply.
ALEXANDRIA will create a proof development environment attractive to working mathematicians, utilising the best technology available across computer science. Its focus will be the management and use of large-scale mathematical knowledge, both theorems and algorithms. The project will employ mathematicians to investigate the formalisation of mathematics in practice. Our already substantial formalised libraries will serve as the starting point. They will be extended and annotated to support sophisticated searches. Techniques will be borrowed from machine learning, information retrieval and natural language processing. Algorithms will be treated similarly: ALEXANDRIA will help users find and invoke the proof methods and algorithms appropriate for the task.
ALEXANDRIA will provide (1) comprehensive formal mathematical libraries; (2) search within libraries, and the mining of libraries for proof patterns; (3) automated support for the construction of large formal proofs; (4) sound and practical computer algebra tools.
ALEXANDRIA will be based on legible structured proofs. Formal proofs should be not mere code, but a machine-checkable form of communication between mathematicians.
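As an illustration (not taken from the proposal text), a structured proof in a modern interactive prover is simultaneously human-readable and machine-checked; a tiny sketch in Lean of the kind of statement such libraries contain:

```lean
-- A minimal structured, machine-checkable proof:
-- the sum of two even natural numbers is even.
theorem even_add_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k :=
  match ha, hb with
  | ⟨m, hm⟩, ⟨n, hn⟩ => ⟨m + n, by rw [hm, hn, Nat.mul_add]⟩
```

Real formalised libraries contain hundreds of thousands of such lemmas; the project's search and mining goals concern locating and applying them at scale.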
Max ERC Funding
2 430 140 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym CASCAde
Project Confidentiality-preserving Security Assurance
Researcher (PI) Thomas GROSS
Host Institution (HI) UNIVERSITY OF NEWCASTLE UPON TYNE
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary "This proposal aims to create a new generation of security assurance. It investigates whether one can certify an inter-connected dynamically changing system in such a way that one can prove its security properties without disclosing sensitive information about the system's blueprint.
This has several compelling advantages. First, the security of large-scale dynamically changing systems will be significantly improved. Second, we can prove properties of topologies, hosts and users who participate in transactions in one go, while keeping sensitive information confidential. Third, we can prove the integrity of graph data structures to others, while maintaining their their confidentiality. This will benefit EU governments and citizens through the increased security of critical systems.
The proposal pursues the main research hypothesis that usable confidentiality-preserving security assurance will trigger a paradigm shift in security and dependability. It will pursue this objective by the creation of new cryptographic techniques to certify and prove properties of graph data structures. A preliminary investigation in 2015 showed that graph signature schemes are indeed feasible. The essence of this solution can be traced back to my earlier research on highly efficient attribute encodings for anonymous credential schemes in 2008.
However, the invention of graph signature schemes only clears one obstacle in a long journey to create a new generation of security assurance systems. There are still many complex obstacles, first and foremost, assuring ""soundness"" in the sense that integrity proofs a verifier accepts translate to the state of the system at that time. The work program involves six WPs: 1) to develop graph signatures and new cryptographic primitives; 2) to establish cross-system soundness; 3) to handle scale and change; 4) to establish human trust and usability; 5) to create new architectures; and 6) to test prototypes in practice."
Max ERC Funding
1 485 643 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym Cytokine Signalosome
Project Mapping Cytokine Signalling Networks using Engineered Surrogate Ligands
Researcher (PI) Ignacio Moraga Gonzalez
Host Institution (HI) UNIVERSITY OF DUNDEE
Call Details Starting Grant (StG), LS6, ERC-2016-STG
Summary Cells use an intricate network of intracellular signalling molecules to translate environmental changes, sensed via surface receptors, into cellular responses. Despite their prominent role in regulating every aspect of life, we lack a comprehensive understanding of how signalling networks convey extracellular information into specific bioactivities and fate decisions. To rationally manipulate cell fate, which could fundamentally change the way that we treat human diseases, first we need a systematic understanding of how signalling is initiated and propagated inside the cell. I discovered that specificity of cytokine receptor signalling not only depends on cellular determinants such as receptor density and endocytic trafficking, but can be systematically altered by modulating ligand binding parameters and receptor binding geometries. A fundamentally novel approach combining high-throughput flow cytometry and QMS with engineered cytokine surrogate ligands able to fine-tune signalling responses will generate detailed maps of the signalling networks engaged by cytokines in time and space to unveil the mechanistic basis that allow a receptor to trigger different signal activation programs and bioactivities in response to different ligands. By quantitatively characterizing the signalling programs activated by ligands, using state-of-the-art biochemical, biophysical, structural, genetic and fluorescence imaging techniques, I plan to identify events critical for cellular decisions. By fully characterizing the intracellular signalling network hard-wired inside a cell and understanding its dynamic in response to environmental changes will we be able to comprehend and manipulate the enormous functional plasticity exhibited by cells. 
TInsights generated will open new fields of investigation where engineered ligands prove indispensable to understand complex biological responses and greatly advance our understanding of cytokine biology and human immunology in health and disease.
Max ERC Funding
1 687 500 €
Duration
Start date: 2017-04-01, End date: 2022-03-31
Project acronym DIADEM
Project Domain-centric Intelligent Automated Data Extraction Methodology
Researcher (PI) Georg Gottlob
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary This proposal is in the area of automated web data extraction and web data management. The aim of our project is to provide the logical, methodological, and algorithmic foundations for the knowledge-based extraction of structured data from web sites belonging to specific domains, such as estate agents, restaurants, travel agencies, car dealers, and so on. One core part of this will be a comprehensive multi-dimensional logical data model that will be used to simultaneously represent both the content of a large website, its structure, inferred user-interaction patterns and all meta-information and knowledge (factual and rule-based) that is necessary to automatically perform the desired extraction tasks. I envision that, based on these new foundations, we will be able to build extremely powerful systems that autonomously explore websites of a given domain, understand their structure and extract and output richly structured data in formats such as XML or RDF. We aim at systems that take as input a URL of a website in a given domain, automatically explore this site and deliver as output a structured data set containing all the relevant information present on that site. As an example, imagine a system specialized in the real-estate domain, that receives as input the URL of any real-estate agent, explores the site automatically and outputs richly structured records of all properties that are currently advertised for sale or for rent on the many web pages of this site. We plan to develop and implement at least two such systems for two different domains, including the one mentioned. The breakthrough in automatic data extraction that we are striving for would enable a quantum leap for two interrelated technologies which are the hottest next topics in web search: vertical search, that is, web search in specialized domains, and object search, that is, the search for web data objects rather than web pages.
Max ERC Funding
2 402 846 €
Duration
Start date: 2010-04-01, End date: 2015-03-31
Project acronym EPIC
Project Evolving Program Improvement Collaborators
Researcher (PI) Mark HARMAN
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary EPIC will automatically construct Evolutionary Program Improvement Collaborators (called Epi-Collaborators) that suggest code changes that improve software according to multiple functional and non-functional objectives. The Epi-Collaborator suggestions will include transplantation of code from a donor system to a host, grafting of entirely new features `grown' (evolved) by the Epi-Collaborator, and identification and optimisation of tuneable `deep' parameters (that were previously unexposed and therefore unexploited).
A key feature of the EPIC approach is that all of these suggestions will be underpinned by automatically-constructed quantitative evidence that justifies, explains and documents improvements. EPIC aims to introduce a new way of developing software, as a collaboration between human and machine, exploiting the complementary strengths of each; the human has domain and contextual insights, while the machine has the ability to intelligently search large search spaces. The EPIC approach directly tackles the emergent challenges of multiplicity: optimising for multiple competing and conflicting objectives and platforms with multiple software versions.
Keywords:
Search Based Software Engineering (SBSE),
Evolutionary Computing,
Software Testing,
Genetic Algorithms,
Genetic Programming.
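The flavour of search-based program improvement can be illustrated with a deliberately tiny sketch (the operator set, test suite and search loop below are invented for illustration and are not the EPIC system): a 'program' is a sequence of primitive operations, and a mutation-based search keeps variants that pass at least as many tests.

```python
import random

# Toy genetic-improvement loop: point-mutate a program (a list of op names)
# and accept variants that do not worsen fitness on a fixed test suite.
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: 2 * x, "neg": lambda x: -x}
TESTS = [(0, 2), (1, 4), (3, 8)]  # (input, expected): the target is f(x) = 2x + 2

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program):
    # Number of test cases the candidate program passes.
    return sum(run(program, i) == o for i, o in TESTS)

def improve(start, generations=500, seed=0):
    rng = random.Random(seed)
    best = list(start)
    for _ in range(generations):
        variant = list(best)
        variant[rng.randrange(len(variant))] = rng.choice(list(OPS))  # point mutation
        if fitness(variant) >= fitness(best):  # accept non-worsening variants
            best = variant
        if fitness(best) == len(TESTS):
            break
    return best

best = improve(["inc", "inc", "inc"])
print(best, "passes", fitness(best), "of", len(TESTS), "tests")
```

Real genetic improvement searches over edits to actual source code and balances multiple objectives (speed, energy, correctness) rather than a single toy test suite.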
Max ERC Funding
2 159 035 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym EyeCode
Project Perceptual encoding of high fidelity light fields
Researcher (PI) Rafal Konrad MANTIUK
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary One of the grand challenges of computer graphics has been to generate images indistinguishable from photographs for a naïve observer. As this challenge is mostly completed and computer generated imagery starts to replace photographs (product catalogues, special effects in cinema), the next grand challenge is to produce imagery that is indistinguishable from the real-world.
Tremendous progress in capture, manipulation and display technologies opens the potential to achieve this new challenge (at the research stage) in the next 5-10 years. Electronic displays offer sufficient resolution, frame rate, dynamic range, colour gamut and, in some configurations, can produce binocular and focal depth cues. However, most of the work done in this area ignores or does not sufficiently address one of the key aspects of this problem - the performance and limitations of the human visual system.
The objective of this project is to characterise and model the performance and limitations of the human visual system when observing complex dynamic 3D scenes. The scene will span a high dynamic range (HDR) of luminance and provide binocular and focal depth cues. In technical terms, the project aims to create a visual model and difference metric for high dynamic range light fields (HDR-LFs). The visual metric will replace tedious subjective testing and provide the first automated method that can optimize encoding and processing of HDR-LF data.
Perceptually realistic video will impose enormous storage and processing requirements compared to traditional video. The bandwidth of such rich visual content will be the main bottleneck for new imaging and display technologies. Therefore, the final objective of this project is to use the new visual metric to derive an efficient and approximately perceptually uniform encoding of HDR-LFs. Such encoding will radically reduce storage and bandwidth requirements and will pave the way for future highly realistic image and video content.
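For background (this is existing standard practice, not the project's proposed encoding): an approximately perceptually uniform encoding already exists for plain HDR luminance, the PQ curve of SMPTE ST 2084, which maps absolute luminance to a signal in which equal steps are roughly equally visible. The project would need analogous encodings for full light fields. A sketch of the PQ inverse EOTF:

```python
# SMPTE ST 2084 ("PQ") inverse EOTF: encodes absolute luminance
# (0..10000 cd/m^2) into an approximately perceptually uniform [0, 1] signal.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_cd_m2: float) -> float:
    y = max(luminance_cd_m2, 0.0) / 10000.0  # normalise to peak luminance
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2

# Five orders of magnitude of luminance map to a well-spread [0, 1] signal.
for lum in (0.1, 1, 10, 100, 1000, 10000):
    print(f"{lum:>8} cd/m2 -> {pq_encode(lum):.4f}")
```

Allocating code values this way is what makes an encoding efficient: bits are spent where the visual system can actually see differences, which is the property the proposed HDR-LF encoding would generalise to light fields.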
Max ERC Funding
1 868 855 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym FOGHORN
Project FOG-aided wireless networks for communication, cacHing and cOmputing: theoRetical and algorithmic fouNdations
Researcher (PI) Osvaldo SIMEONE
Host Institution (HI) KING'S COLLEGE LONDON
Call Details Consolidator Grant (CoG), PE7, ERC-2016-COG
Summary "The FOGHORN project aims at developing the theoretical and algorithmic foundations of fog-aided wireless networks. This is an emerging class of wireless systems that leverages the synergy and complementarity of cloudification and edge processing, two key technologies in the evolution towards 5G systems and beyond. Fog-aided wireless networks can reap the bene
fits of centralization via cloud processing, in terms of capital and operating cost reductions, greening, and
enhanced spectral e fficiency, while, at the same time, being able to cater to low-latency applications, such as the ""tactile"" internet, by means of localized intelligence at the network edge. The operation of fog-aided wireless networks poses novel fundamental research problems pertaining to the optimal management of the communication, caching and computing resources at the
cloud and at the edge, as well as to the transmission on the fronthaul network connecting cloud and edge. The solution of these problems challenges the theoretical principles and engineering insights which have underpinned the design of existing networks. The initial research activity on the topic, of which the EU is at the forefront, focuses, by and large, on ad hoc solutions and technologies. In contrast, the goal of this project is to develop fundamental theoretical insights
and algorithmic principles with the main aim of guiding engineering choices, unlocking new academic opportunities and disclosing new technologies. The theoretical framework is grounded in network information theory, which enables the distillation of design principles, along with signal processing, (non-convex) optimization, queuing and distributed computing to develop and analyse algorithmic solutions."
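The caching trade-off described above (serving content from localized intelligence at the edge versus fetching it over the fronthaul from the cloud) can be made concrete with a toy sketch: a least-recently-used content cache at the network edge, where every miss counts as one fronthaul transmission. This is a minimal illustration only, not one of the project's proposed algorithms; the `EdgeCache` class and the request trace are invented for the example.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU content cache at the network edge. Requests served from the
    cache avoid a fetch over the fronthaul; each miss is counted as one
    fronthaul transmission from the cloud."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()       # content_id -> cached flag, in recency order
        self.fronthaul_fetches = 0

    def request(self, content_id):
        if content_id in self.store:
            self.store.move_to_end(content_id)   # cache hit: refresh recency
            return "edge"
        self.fronthaul_fetches += 1              # miss: fetch over the fronthaul
        self.store[content_id] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        return "cloud"

cache = EdgeCache(capacity=2)
trace = ["a", "b", "a", "c", "b", "a"]
served = [cache.request(c) for c in trace]
print(served, cache.fronthaul_fetches)
# → ['cloud', 'cloud', 'edge', 'cloud', 'cloud', 'cloud'] 5
```

Even this toy shows why joint management of caching and fronthaul resources matters: a slightly larger cache, or a smarter eviction policy informed by request statistics, directly reduces fronthaul load.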
Max ERC Funding
2 318 719 €
Duration
Start date: 2017-06-01, End date: 2022-05-31
Project acronym IMBIBE
Project Innovative technology solutions to explore effects of the microbiome on intestine and brain pathophysiology
Researcher (PI) Róisín Meabh OWENS
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Call Details Consolidator Grant (CoG), PE7, ERC-2016-COG
Summary The human gut is host to over 100 trillion bacteria that are known to be essential for human health. Intestinal microbes can affect the function of the gastrointestinal (GI) tract via immunity, nutrient absorption, energy metabolism and intestinal barrier function. Alterations in the microbiome have been linked with many disease phenotypes, including colorectal cancer, Crohn’s disease, obesity and diabetes, as well as neuropathologies such as autism spectrum disorder (ASD), stress and anxiety. Animal studies remain one of the few means of assessing the importance of microbiota for development and well-being; however, the use of animals to study human systems is increasingly questioned on grounds of ethics, cost and relevance. In vitro models have developed at an accelerated pace in the past decade, benefitting from advances in cell culture (in particular 3D cell culture and the use of human cell types), which has increased the viability of these systems as alternatives to traditional cell culture methods. This in turn will allow the refinement and replacement of animal use. In particular in basic science and high-throughput approaches, where animal models are under significant pressure to be replaced, in vitro human models can be singularly appropriate. In vitro models incorporating the microbiota have not yet been demonstrated, even though the transformative role of the microbiota appears unquestionable. The IMBIBE project will use engineering and materials science approaches to develop complete (i.e. human and microbe) in vitro models that truly capture the human situation. IMBIBE will benefit from cutting-edge organic electronic technology, which will allow real-time monitoring and thus enable iterative improvements in the models employed. The result of this project will be a platform to study host-microbiome interactions and their consequences for pathophysiology, in particular of the GI tract and brain.
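As a toy illustration of the kind of real-time monitoring such a platform could enable, the sketch below watches a stream of barrier-resistance readings (in the spirit of trans-epithelial electrical resistance measurements) and flags the first time point at which the intestinal barrier appears disrupted. The function, the readings and the 30% drop threshold are all illustrative assumptions, not project data or methods.

```python
def flag_barrier_disruption(readings, baseline_n=3, drop_fraction=0.3):
    """Return the index of the first reading that falls more than
    `drop_fraction` below the baseline (mean of the first `baseline_n`
    readings), or None if the barrier stays intact. Purely illustrative."""
    baseline = sum(readings[:baseline_n]) / baseline_n
    threshold = baseline * (1.0 - drop_fraction)
    for t, r in enumerate(readings):
        if r < threshold:
            return t     # first time point below threshold
    return None

# Hypothetical resistance trace in ohm*cm^2: stable, then a sharp drop
readings = [410, 405, 415, 400, 390, 260, 240]
print(flag_barrier_disruption(readings))  # → 5
```

A continuous readout of this kind is what would let a model be improved iteratively: a flagged disruption can be correlated immediately with a change in the microbial or culture conditions.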
Max ERC Funding
1 992 578 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym LEMAN
Project Deep LEarning on MANifolds and graphs
Researcher (PI) Michael Bronstein
Host Institution (HI) IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary The aim of the project is to develop a geometrically meaningful framework that allows generalizing deep learning paradigms to data on non-Euclidean domains. Such geometric data are becoming increasingly important in a variety of fields including computer graphics and vision, sensor networks, biomedicine, genomics, and computational social sciences. Existing methodologies for dealing with geometric data are limited, and a paradigm shift is needed to achieve quantitatively and qualitatively better results.
Our project is motivated by the recent dramatic success of deep learning methods in a wide range of applications, a success that has shaken the academic and industrial worlds. Though these methods have been known for decades, the computational power of modern computers, the availability of large datasets, and efficient optimization methods have made it possible to create and effectively train complex models, yielding a qualitative breakthrough. In particular, in computer vision, deep neural networks have achieved unprecedented performance on notoriously hard problems such as object recognition. However, research so far has mainly focused on developing deep learning methods for Euclidean data such as acoustic signals, images, and videos. In fields dealing with geometric data, the adoption of deep learning has lagged behind, primarily because the non-Euclidean nature of the underlying objects makes the very definition of the basic operations used in deep networks elusive.
The ambition of the project is to develop geometric deep learning methods all the way from a mathematical model to an efficient and scalable software implementation, and apply them to some of today’s most important and challenging problems from the domains of computer graphics and vision, genomics, and social network analysis. We expect the proposed framework to lead to a leap in performance on several known tough problems, as well as to allow addressing new and previously unthinkable problems.
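The difficulty of generalizing basic deep-network operations to graphs can be made concrete with a minimal sketch: one graph-convolution layer that replaces the sliding window of an ordinary convolution with aggregation over a symmetrically normalised adjacency matrix, a common construction in the geometric deep learning literature. The toy graph, features and weights below are invented for illustration; this is not the project's own method.

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution layer: aggregate each node's neighbourhood
    with a symmetrically normalised adjacency (self-loops added), then
    apply a shared linear map followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # normalised propagation matrix
    return np.maximum(A_norm @ X @ W, 0.0)   # ReLU non-linearity

# Toy 4-node path graph with 2-dimensional node features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))  # input node features
W = np.random.default_rng(1).normal(size=(2, 3))  # shared learnable weights
H = graph_conv(X, A, W)
print(H.shape)  # (4, 3): three output features per node
```

The key design choice mirrored here is weight sharing across nodes, the graph analogue of translation invariance, while the neighbourhood structure is supplied entirely by the adjacency matrix rather than a regular grid.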
Max ERC Funding
1 997 875 €
Duration
Start date: 2017-10-01, End date: 2022-09-30