Project acronym AUTAR
Project A Unified Theory of Algorithmic Relaxations
Researcher (PI) Albert Atserias Peri
Host Institution (HI) UNIVERSITAT POLITECNICA DE CATALUNYA
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary For a large family of computational problems collectively known as constrained optimization and satisfaction problems (CSPs), four decades of research in algorithms and computational complexity have led to a theory that tries to classify them as algorithmically tractable vs. intractable, i.e., polynomial-time solvable vs. NP-hard. However, there remains an important gap in our knowledge: many CSPs of interest resist classification by this theory. Problems of practical relevance among these include fundamental partition problems in graph theory, isomorphism problems in combinatorics, and strategy-design problems in mathematical game theory. To tackle this gap, the research of the last decade has been driven either by finding hard instances for algorithms that solve tighter and tighter relaxations of the original problem, or by formulating new hardness hypotheses that are stronger but admittedly less robust than NP-hardness.
The ultimate goal of this project is to close the gap between the partial progress that these approaches represent and the original classification project into tractable vs. intractable problems. Our thesis is that the field has reached a point where, in many cases of interest, analysing the current candidate algorithms that appear to solve all instances could suffice to classify the problem one way or the other, without the need for alternative hardness hypotheses. The novelty in our approach is a program to develop our recent discovery that, in some cases of interest, two methods from different areas match in strength: indistinguishability pebble games from mathematical logic, and hierarchies of convex relaxations from mathematical programming. Thus, we aim to make significant advances in the status of important algorithmic problems by looking for a general theory that unifies and goes beyond the current understanding of its components.
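For orientation, one standard textbook way to write the level-k hierarchy of linear relaxations of a CSP over a domain D (our illustration in generic notation, not the project's own formulation) asks for locally consistent distributions over partial assignments:

\begin{align*}
 &\textstyle\sum_{f \in D^{S}} \mu_{S}(f) = 1, \quad \mu_{S}(f) \ge 0
   && \text{for every set } S \text{ of at most } k \text{ variables},\\
 &\textstyle\sum_{f \in D^{S},\, f|_{T} = g} \mu_{S}(f) = \mu_{T}(g)
   && \text{for all } T \subseteq S \text{ and } g \in D^{T},\\
 &\mu_{S}(f) = 0
   && \text{whenever } f \text{ falsifies a constraint with scope inside } S.
\end{align*}

This is the kind of relaxation hierarchy whose strength the summary compares with indistinguishability pebble games.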
Max ERC Funding
1 725 656 €
Duration
Start date: 2015-06-01, End date: 2020-05-31
Project acronym CONCERT
Project Description of information transfer across macromolecules by concerted conformational changes
Researcher (PI) Xavier Salvatella Giralt
Host Institution (HI) FUNDACIO INSTITUT DE RECERCA BIOMEDICA (IRB BARCELONA)
Call Details Consolidator Grant (CoG), PE4, ERC-2014-CoG
Summary Signal transduction in biology relies on the transfer of information across biomolecules by concerted conformational changes that cannot currently be characterized experimentally at high resolution. In CONCERT we will develop a method, based on nuclear magnetic resonance spectroscopy in solution, that will provide very detailed descriptions of such changes by exploiting the information about structural heterogeneity contained in residual dipolar couplings measured in steric alignment, a parameter exquisitely sensitive to molecular shape. To show how this new method enables the study of information transfer, we will determine conformational ensembles that report on the intra- and inter-domain concerted conformational changes that activate the androgen receptor (AR), a large allosteric multi-domain protein that regulates the male phenotype and is a therapeutic target for castration-resistant prostate cancer, the condition suffered by prostate cancer patients who have become refractory to hormone therapy, the first line of treatment for this disease. To complement the structural information obtained by nuclear magnetic resonance and, especially, to measure the rate of information transfer across the androgen receptor, we will carry out, in a collaborative fashion, high-precision single-molecule Förster resonance energy transfer and fluorescence correlation spectroscopy experiments on AR constructs labelled with fluorescent dyes. In summary, we will develop a method that makes it possible to describe some of the most fascinating biological phenomena, such as allostery and signal transduction, and that will, in the long term, be an instrument for the discovery of drugs to treat castration-resistant prostate cancer, a late stage of prostate cancer that is incurable and kills ca. 70,000 European men every year.
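For reference, the textbook expression for the residual dipolar coupling of a bond vector with polar angles (θ, φ) in the alignment frame, with D_a the axial component and R the rhombicity of the alignment tensor (quoted here as standard background, not the project's derivation), is:

\begin{equation*}
 D(\theta, \varphi) \;=\; D_{a} \left[ (3\cos^{2}\theta - 1)
   + \tfrac{3}{2}\, R \, \sin^{2}\theta \, \cos 2\varphi \right]
\end{equation*}

Structural heterogeneity enters through averaging of this orientation-dependent quantity over the conformational ensemble, which is what makes RDCs informative about concerted motions.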
Max ERC Funding
1 950 000 €
Duration
Start date: 2015-07-01, End date: 2020-06-30
Project acronym D-SynMA
Project Distributed Synthesis: from Single to Multiple Agents
Researcher (PI) Nir PITERMAN
Host Institution (HI) GOETEBORGS UNIVERSITET
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary Computing is changing from living on our desktops and in dedicated devices to being everywhere. In phones, sensors, appliances, and robots, computers (from now on, devices) are everywhere and affect all aspects of our lives. Techniques to make them safe and reliable are being investigated and are starting to emerge and consolidate. However, these techniques only enable devices to work in isolation or to co-exist; we currently do not have techniques that enable the development of real autonomous collaboration between devices. Such techniques would revolutionize all usage of devices and, as a consequence, our lives. Manufacturing, supply chains, transportation, infrastructure, and earth and space exploration would all be transformed by techniques that enable the development of collaborating devices.
When considering isolated (and co-existing) devices, reactive synthesis – the automatic production of plans from high-level specifications – is emerging as a viable tool for the development of robots and reactive software. This is especially important in the context of safety-critical systems, where assurances are required and systems need guarantees on performance. The techniques developed today to support robust, assured, reliable, and adaptive devices rely on a major change in focus of reactive synthesis. The revolution of correct-by-construction systems from specifications is occurring and is being pushed forward.
However, taking this approach forward to support real collaboration between devices requires theoretical frameworks that enable distributed synthesis. Such foundations will let the correct-by-construction revolution unleash its potential and allow a multiplicative increase of utility through cooperative computation.
d-SynMA will take distributed synthesis to this new frontier by considering novel interaction and communication concepts, creating an adaptable framework for the correct-by-construction application of collaborating devices.
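As background, the single-agent kernel that reactive synthesis builds on can be phrased as solving a two-player game. The Python sketch below (a minimal illustration with an assumed toy encoding, not d-SynMA's framework) computes the winning region of a safety game: the set of states from which the controller can avoid error states forever.

# Minimal safety-game solver: iteratively discard states from which
# the controller (owner of the "controlled" states) cannot stay safe.
def safe_states(states, edges, controlled, bad):
    win = set(states) - set(bad)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            if s in controlled:
                ok = any(t in win for t in edges[s])  # controller needs one safe move
            else:
                ok = all(t in win for t in edges[s])  # environment may pick any move
            if not ok:
                win.remove(s)
                changed = True
    return win

# Toy game: the controller owns "a" and "c"; "x" is the error state.
edges = {"a": {"b", "x"}, "b": {"c"}, "c": {"a"}, "x": {"x"}}
print(safe_states(edges, edges, {"a", "c"}, {"x"}))  # {'a', 'b', 'c'}

A strategy read off from the winning region (always move to a successor inside it) is correct by construction, which is the guarantee the summary refers to.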
Max ERC Funding
1 871 272 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym ELECNANO
Project Electrically Tunable Functional Lanthanide Nanoarchitectures on Surfaces
Researcher (PI) DAVID ECIJA FERNANDEZ
Host Institution (HI) FUNDACION IMDEA NANOCIENCIA
Call Details Consolidator Grant (CoG), PE4, ERC-2017-COG
Summary Lanthanide metals are ubiquitous nowadays, finding use in luminescent materials, optical amplifiers and waveguides, lasers, photovoltaics, rechargeable batteries, catalysts, alloys, magnets, bio-probes, and therapeutic agents. In addition, they bear potential for high temperature superconductivity, magnetic refrigeration, molecular magnetic storage, spintronics and quantum information.
Surprisingly, the study of the physico-chemical properties of lanthanides on surfaces is in its infancy, particularly at the nanoscale. To address this extraordinary scientific opportunity, I will research the foundations and prospects of lanthanide elements for designing functional nanoarchitectures on surfaces, and I will study their inherent physico-chemical phenomena in distinct coordination environments, targeting novel approaches for sensing, nanomagnetism and electroluminescence. Importantly, our studies will encompass both metal substrates and decoupling surfaces, including ultra-thin insulating films and graphene. Building on these studies and in parallel, we will focus on voltage back-gated graphene supports, thus surpassing the seminal knowledge on electrically-inert substrates and broadening the scope of our research to address the overarching objective of the proposal, i.e., the design of electrically tunable functional lanthanide nanomaterials.
The culmination of the ELECNANO project will provide strategies for:
1. Design of functional nanomaterials on high-technological supports.
2. Development of advanced coordination chemistry on surfaces.
3. Rationalization of the physico-chemical properties of lanthanide coordination environments.
4. Engineering of lanthanide nanoarchitectures for ultimate sensing, nanomagnetism and electroluminescence.
5. In-situ atomistic views of electrically tunable materials and unprecedented fundamental studies of charge-molecule/metal physics on devices.
Max ERC Funding
1 994 713 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym Hi-EST
Project Holistic Integration of Emerging Supercomputing Technologies
Researcher (PI) David Carrera Perez
Host Institution (HI) BARCELONA SUPERCOMPUTING CENTER - CENTRO NACIONAL DE SUPERCOMPUTACION
Call Details Starting Grant (StG), PE6, ERC-2014-STG
Summary Hi-EST aims to address a new class of placement problem, a challenge for the computational sciences that consists in mapping workloads onto hardware resources so as to maximise both the performance of the workloads and the utilization of the resources. The objective of the placement problem is to manage the computing infrastructure more efficiently by continuously adjusting the number and type of resources allocated to each workload.
Placement, in this context, is well known to be NP-hard and resembles the multi-dimensional knapsack problem. Heuristics have been used in the past in different domains, providing vertical solutions that cannot be generalised. When the workload mix is heterogeneous and the infrastructure hybrid, the problem becomes even more challenging. This is the problem that Hi-EST plans to address. The approach will build on four research pillars: supervised learning of placement properties, placement algorithms for tasks, placement algorithms for data, and software-defined environments for placement enforcement.
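To make the knapsack analogy concrete, here is a deliberately small greedy placement heuristic in Python; the scoring rule and data layout are illustrative assumptions, not Hi-EST's algorithms.

# Greedy multi-dimensional placement: prefer workloads with high value
# per unit of (capacity-normalised) resource demand, place while it fits.
def greedy_place(workloads, capacity):
    free = list(capacity)
    placed = []
    def score(w):
        _, value, demand = w
        used = sum(d / c for d, c in zip(demand, capacity))
        return value / used if used else float("inf")
    for name, value, demand in sorted(workloads, key=score, reverse=True):
        if all(d <= f for d, f in zip(demand, free)):
            free = [f - d for f, d in zip(free, demand)]
            placed.append(name)
    return placed

# One node with (cpu, memory) = (16, 64) and three candidate workloads.
jobs = [("db", 10, (8, 32)), ("web", 6, (2, 4)), ("batch", 9, (12, 48))]
print(greedy_place(jobs, (16, 64)))  # ['web', 'db']

Such hand-tuned heuristics are exactly the vertical solutions the summary argues cannot be generalised, which motivates learning placement properties instead.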
Hi-EST plans to advance the research frontier in four areas: 1) Adaptive Learning Algorithms: by proposing the first known use of Deep Learning techniques to guide task and data placement decisions; 2) Task Placement: by proposing the first known algorithm to map heterogeneous sets of tasks onto systems enabled with Active Storage capabilities, and by extending unifying performance models for heterogeneous workloads to cover an unprecedented number of workload types; 3) Data Placement: by proposing the first known algorithm to map data onto heterogeneous sets of key/value stores connected to Active Storage technologies; and 4) Software Defined Environments (SDE): by extending SDE description languages with a vocabulary, so far lacking, for describing Supercomputing workloads, which will be leveraged to combine data and task placement into a single decision-making process.
Max ERC Funding
1 467 783 €
Duration
Start date: 2015-05-01, End date: 2020-04-30
Project acronym ICON-BIO
Project Integrated Connectedness for a New Representation of Biology
Researcher (PI) Natasa PRZULJ
Host Institution (HI) BARCELONA SUPERCOMPUTING CENTER - CENTRO NACIONAL DE SUPERCOMPUTACION
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary The aim of the project is to develop a comprehensive framework for generalizing network analytics and fusion paradigms of non-negative matrix factorization to medical data. Heterogeneous, interconnected, systems-level omics data are becoming increasingly available and important in precision medicine. We are seeking to better stratify and subtype patients into risk groups, discover new biomarkers for complex and rare diseases, personalize medical treatment based on genomics and exposures of an individual, and repurpose known drugs to different patient groups. Existing methodologies for dealing with these big data are limited and a paradigm shift is needed to achieve quantitatively and qualitatively better results.
The project is motivated by the recent success of methods based on non-negative matrix tri-factorization (NMTF) for the fusion of heterogeneous data in biomedicine. Though these methods have been known for some time, the availability of large datasets, coupled with modern computational power and efficient optimization methods, has allowed the creation and efficient training of complex models that can make a qualitative breakthrough. For example, NMTF has recently achieved unprecedented performance on the exceptionally hard problem of simultaneously utilizing the wealth of diverse molecular and clinical data in precision medicine. However, research thus far has been limited to special variants of this problem and has used only fixed-point methods to address these exciting examples of hard non-convex, high-dimensional, non-linear optimization problems.
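For illustration, a minimal NumPy sketch of NMTF with textbook-style multiplicative (fixed-point) updates, the class of method the summary refers to (an assumed generic implementation, not the project's code):

import numpy as np

def nmtf(X, k1, k2, iters=200, eps=1e-9, seed=0):
    # Factorise a non-negative matrix X ~= U @ S @ V.T, keeping all
    # three factors non-negative via multiplicative updates.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k1))
    S = rng.random((k1, k2))
    V = rng.random((m, k2))
    for _ in range(iters):
        U *= (X @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
        V *= (X.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
    return U, S, V

X = np.random.default_rng(1).random((20, 15))   # toy non-negative data
U, S, V = nmtf(X, 4, 3)
print(np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X))  # relative error

In data fusion, several such factorizations over different data matrices share factors, which is what couples the heterogeneous sources.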
The ambition of the project is to develop general data fusion methods, from mathematical models to efficient and scalable software implementation, and apply them to the domain of biomedical informatics. The project will lead to a paradigm shift in biomedical and computational understanding of data and diseases that will open up ways to solving some of the major bottlenecks in precision medicine and other domains.
Max ERC Funding
2 000 000 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym NanoBioNext
Project Nanoscale Biomeasurements of Nerve Cells and Vesicles: Molecular Substructure and the Nature of Exocytosis
Researcher (PI) Andrew EWING
Host Institution (HI) GOETEBORGS UNIVERSITET
Call Details Advanced Grant (AdG), PE4, ERC-2017-ADG
Summary I propose to develop and apply state-of-the-art analytical methods to investigate cell membrane and vesicle substructure, in order to elucidate the chemistry of the closing regulatory phase of individual exocytosis events. The general goal of this proposal is to develop a new brand of analytical nanoelectrochemistry (nanogap and nanopore electrochemical cytometry), combined with chemical nanoscopy imaging by STED and nanoscale mass spectrometry imaging. I propose to apply these methods to the nature of exocytosis and the chemistry that initiates the process of a short-term memory. We have recently discovered that most neurotransmitter release is partial, via an open-and-closed vesicle release process, which allows new mechanisms of plasticity and synaptic strength to be hypothesized. I propose to (i) test whether partial release is a ubiquitous phenomenon, (ii) develop new nanoscale analytical methods to measure exocytotic release from pancreatic beta cells and a neuron in Drosophila, and to elucidate the substructure of nanometer-scale vesicles, (iii) use these analytical methods in model cells and neurons to test the hypothesis that lipid membrane changes are involved in the initiation of the chemical events leading to short-term memory, and (iv) test the effects of drugs and zinc on the plasticity of vesicles and exocytosis. This work combines new method development with a revolutionary application of chemical analysis to test the hypothesis that lipids play a previously unanticipated role in synaptic plasticity and in the chemical structures involved in the initiation of short-term memory. As long-term impact, this will provide sensitive analytical tools to understand how changes in these chemical species might be affected in diseases involving short-term memory loss.
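As quantitative background for these measurements (a back-of-the-envelope sketch under assumed numbers, not the project's analysis code): integrating an amperometric current spike gives the transferred charge Q, and Faraday's law, N = Q·N_A/(nF), converts it into the number of oxidised neurotransmitter molecules.

import numpy as np

F = 96485.0      # Faraday constant, C/mol
N_A = 6.022e23   # Avogadro constant, 1/mol

def molecules_from_spike(t, i, n_electrons=2):
    # t in seconds, i in amperes; dopamine oxidation transfers 2 electrons.
    charge = float(np.sum((i[1:] + i[:-1]) * np.diff(t) / 2))  # Q = integral of i dt
    return charge * N_A / (n_electrons * F)

# Synthetic ~10 ms spike peaking at 20 pA.
t = np.linspace(0.0, 0.01, 1000)
i = 20e-12 * np.exp(-((t - 0.003) / 0.001) ** 2)
print(f"{molecules_from_spike(t, i):.3g} molecules")  # on the order of 1e5

Comparing such counts between full and partial (open-and-closed) events is what reveals that most release is partial.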
Max ERC Funding
2 500 000 €
Duration
Start date: 2018-08-01, End date: 2023-07-31
Project acronym P2PMODELS
Project Decentralized Blockchain-based Organizations for Bootstrapping the Collaborative Economy
Researcher (PI) Samer Hassan Collado
Host Institution (HI) UNIVERSIDAD COMPLUTENSE DE MADRID
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary The Collaborative Economy (CE) is rapidly expanding through new forms of Internet labor and commerce, from Wikipedia to Kickstarter and Airbnb. However, it suffers from 3 main challenges: (1) Infrastructure: the centralized surveillance that the central hubs of information exercise over their users; (2) Governance: disempowered communities that have no decision-making influence over the platform; and (3) Economy: the concentration of profits in a few major players who do not proportionally redistribute them to the contributors.
How can CE software platforms be implemented for solving these challenges? P2PMODELS explores a new way of building CE software platforms harnessing the blockchain, an emerging technology that enables autonomous agent-mediated organizations, in order to (1) provide a software framework to build decentralized infrastructure for Collaborative Economy organizations that do not depend on central authorities, (2) enable democratic-by-design models of governance for communities, by encoding rules directly into the software platform, and (3) enable fairer value distribution models, thus improving the economic sustainability of both CE contributors and organizations.
Together, these 3 objectives will bootstrap the emergence of a new generation of self-governed and more economically sustainable peer-to-peer CE communities. The interdisciplinary nature of P2PMODELS will open a new research field around agent-mediated organizations for collaborative communities and their self-enforcing rules for automatic governance and economic rewarding. Bringing this proposal to life requires a funding scheme compatible with a high-risk/high-gain vision to finance a fully dedicated and highly motivated research team with multidisciplinary skills.
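To make "encoding rules directly into the software platform" concrete, here is a deliberately tiny, hypothetical governance rule expressed as code; the rule and thresholds are illustrative assumptions, not P2PMODELS's design.

# A proposal passes only if enough members vote (quorum) and a
# qualified majority of the voters approve.
def proposal_passes(votes, members, quorum=0.5, majority=2 / 3):
    # votes: dict member -> bool (True = approve)
    turnout = len(votes) / len(members)
    if turnout < quorum:
        return False
    return sum(votes.values()) / len(votes) >= majority

members = {"ana", "bo", "chen", "dina"}
print(proposal_passes({"ana": True, "bo": True, "chen": False}, members))  # True

On a blockchain, such a rule would run as a smart contract that no central operator can bypass or rewrite, which is the democratic-by-design property the summary targets.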
Max ERC Funding
1 498 855 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym ProtonPump
Project Structural mechanism coupling the reduction of oxygen to proton pumping in living cells
Researcher (PI) Richard Neutze
Host Institution (HI) GOETEBORGS UNIVERSITET
Call Details Advanced Grant (AdG), PE4, ERC-2017-ADG
Summary Every breath you take delivers oxygen to mitochondria within the cells of your body. Mitochondria are energy transducing organelles that accept electrons liberated from the food that you eat in order to generate a transmembrane proton concentration gradient. Cytochrome c oxidase is an integral membrane protein complex in the mitochondria that accepts four electrons and reduces molecular oxygen to two water molecules while simultaneously pumping protons against a transmembrane potential. Cytochrome c oxidase homologues are found in almost all living organisms. Because oxygen is the final destination of the transferred electrons, this enzyme family is referred to as the terminal oxidases. Crystal structures of terminal oxidases have been known for more than two decades and these enzymes have been studied with virtually all biophysical and biochemical methods. Despite this scrutiny, it is unknown how redox reactions at the enzyme's active site are coupled to proton pumping. Here I aim to create a three-dimensional movie that reveals how proton exchange between key amino acid residues is controlled by the movements of electrons within the enzyme. This work will utilize state-of-the-art methods of time-resolved serial crystallography, time-resolved wide-angle X-ray scattering and time-resolved X-ray emission spectroscopy at European X-ray free electron lasers (XFELs) and synchrotron radiation facilities to observe structural changes in terminal oxidases over time. I will develop new approaches for rapidly delivering oxygen or electrons into the protein's active site in order to initiate the catalytic cycle in microcrystals and in solution. This project will yield completely new insight into one of the most important chemical reactions in biology while opening up the field of time-resolved structural studies of proteins beyond a handful of naturally occurring light-driven systems.
Max ERC Funding
2 500 000 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym SEDAL
Project Statistical Learning for Earth Observation Data Analysis
Researcher (PI) Gustau Camps-Valls
Host Institution (HI) UNIVERSITAT DE VALENCIA
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary SEDAL is an interdisciplinary project that aims to develop novel statistical learning methods to analyze Earth Observation (EO) satellite data. In the last decade, machine learning models have helped to monitor land, oceans, and the atmosphere through the analysis and estimation of climate and biophysical parameters. Current approaches, however, cannot deal efficiently with the particular characteristics of remote sensing data. In the coming few years, this problem will grow considerably: several satellite missions, such as the operational EU Copernicus Sentinels, will be launched, and we will face the urgent need to process and understand huge amounts of complex, heterogeneous, multisource, and structured data in order to monitor the rapid changes already occurring in our Planet.
SEDAL aims to develop the next generation of statistical inference methods for EO data analysis. We will develop advanced regression methods that improve efficiency, prediction accuracy, and uncertainty estimates, encode physical knowledge about the problem, and attain self-explanatory models learned from empirical data. Even more importantly, we will learn graphical causal models to explain the potentially complex interactions between key observed variables, and to discover hidden essential drivers and confounding factors. This project will thus address the fundamental problem of moving from correlation to dependence, and then to causation, through EO data analysis. The theoretical developments will be guided by the challenging problems of estimating biophysical parameters and learning causal relations at both local and global planetary scales.
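As one concrete baseline for regression with uncertainty estimates (our illustration, not necessarily the methods SEDAL will develop), Gaussian process regression returns both a prediction and a predictive standard deviation:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (60, 1))                   # e.g. a vegetation index
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(60)  # e.g. a biophysical parameter

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)
mean, std = gpr.predict(np.linspace(0, 1, 5).reshape(-1, 1), return_std=True)
print(np.round(mean, 2), np.round(std, 2))       # estimate and its uncertainty

Encoding physical knowledge and moving from such correlational models to causal graphical models is the step beyond this baseline that the project proposes.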
The long-term vision of SEDAL is to open new frontiers and foster research towards algorithms capable of discovering knowledge from EO data, a stepping stone before the more ambitious far-end goal of machine reasoning about anthropogenic climate change.
Max ERC Funding
1 716 954 €
Duration
Start date: 2015-09-01, End date: 2020-08-31