Project acronym ACCOPT
Project ACelerated COnvex OPTimization
Researcher (PI) Yurii NESTEROV
Host Institution (HI) UNIVERSITE CATHOLIQUE DE LOUVAIN
Call Details Advanced Grant (AdG), PE1, ERC-2017-ADG
Summary The amazing rate of progress in computer technologies and telecommunications presents many new challenges for Optimization Theory. New problems are usually very big in size, very special in structure, and possibly have distributed data support. This makes them unsolvable by standard optimization methods. In these situations, old theoretical models, based on hidden Black-Box information, cannot work. New theoretical and algorithmic solutions are urgently needed. In this project we will concentrate on the development of fast optimization methods for problems of big and very big size. All the new methods will be endowed with provable efficiency guarantees for large classes of optimization problems arising in practical applications. Our main tool is the acceleration technique developed for the standard Black-Box methods as applied to smooth convex functions. However, we will have to adapt it to deal with different situations.
The first line of development will be based on the smoothing technique as applied to non-smooth functions. We propose to substantially extend this approach to generate approximate solutions in relative scale. The second line of research will be related to applying acceleration techniques to second-order methods minimizing functions with sparse Hessians. Finally, we aim to develop fast gradient methods for huge-scale problems. The size of these problems is so big that even the usual vector operations are extremely expensive. Thus, we propose to develop new methods with sublinear iteration costs. In our approach, the main source for achieving improvements will be the proper use of problem structure.
Our overall aim is to be able to solve in a routine way many important problems which currently look unsolvable. Moreover, the theoretical development of Convex Optimization will reach the state where there is no gap between theory and practice: the theoretically most efficient methods will definitely outperform any homebred heuristics.
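The acceleration technique for smooth convex functions mentioned above is the accelerated gradient method. As a purely illustrative sketch (not part of the proposal text), the following shows the standard momentum-based variant minimizing a smooth convex quadratic; the test problem and step-size choice are assumptions for the example:

```python
import numpy as np

def accelerated_gradient(grad, x0, L, n_iters=500):
    """Accelerated gradient method for a smooth convex function f.

    grad:    callable returning the gradient of f
    x0:      starting point
    L:       Lipschitz constant of the gradient (step size is 1/L)
    n_iters: number of iterations
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iters):
        x_next = y - grad(y) / L                         # gradient step from the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2         # momentum schedule
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # extrapolation (momentum) step
        x, t = x_next, t_next
    return x

# Illustrative problem: minimize f(x) = 0.5 * ||A x - b||^2,
# whose gradient is A^T (A x - b) with Lipschitz constant ||A^T A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)
x_star = accelerated_gradient(grad, np.zeros(2), L)
```

For smooth convex objectives this scheme attains the O(1/k^2) rate that plain gradient descent (O(1/k)) cannot match, which is what makes acceleration the natural starting point for the extensions the project describes.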
Max ERC Funding
2 090 038 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym ADAPTEM
Project Adaptive transmission electron microscopy: development of a programmable phase plate
Researcher (PI) Johan VERBEECK
Host Institution (HI) UNIVERSITEIT ANTWERPEN
Call Details Proof of Concept (PoC), ERC-2017-PoC
Summary Adaptive optics, the technology to dynamically program the phase of optical waves, has sparked an avalanche of scientific discoveries and innovations in light-optics applications. Nowadays, the phase of optical waves can be dynamically programmed, enabling research on exotic optical beams and unprecedented control over the performance of optical instruments. Although electron waves carry many similarities to their optical counterparts, a generic programmable phase plate for electrons is still missing. This project aims at developing a prototype of a programmable electrostatic phase plate that allows the user to freely change the phase of electron waves, and at demonstrating it to potential licensees for further upscaling and introduction to the market. The target of this PoC project is the realization of a tunable, easy-to-use 5x5-pixel prototype that will demonstrate the potential of adaptive optics in electron microscopy. Its realization will be based on lithographic technology to allow for future upscaling. It is expected that such a phase plate can dramatically increase the information obtained at a given electron dose, limiting the detrimental effects of beam damage that currently hinders the use of electron microscopy in e.g. life sciences. As such, it has the potential to disrupt the electron microscopy market with novel applications while at the same time reducing cost and complexity and increasing the potential for fully automated instruments.
Max ERC Funding
148 500 €
Duration
Start date: 2018-03-01, End date: 2019-08-31
Project acronym ADMIRE
Project A holographic microscope for the immersive exploration of augmented micro-reality
Researcher (PI) Roberto DI LEONARDO
Host Institution (HI) UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA
Call Details Proof of Concept (PoC), ERC-2017-PoC
Summary Virtual reality, augmented reality and mixed reality are beginning to transform the way we explore and acquire information from the macroscopic world around us. At the same time, recent advances in holographic microscopy are providing new tools for the 3D imaging of physical and biological phenomena occurring at the micron scale. Project ADMIRE will combine these two emerging technologies into the first prototype of an AugmenteD MIcro-REality system for the immersive exploration and quantitative analysis of three-dimensional processes at the micron scale.
The core of the proposed system will be the three-axis holographic microscope (3DHM) developed within the ERC Project SMART to investigate fast 3D dynamics of swimming bacteria.
The ADMIRE project will transform the 3DHM from a laboratory technique, targeted at a specific application and operated by highly specialised researchers, into a general-purpose instrument composed of a compact add-on module for commercial optical microscopes and a virtual reality interface allowing for direct and intuitive use. Through the ADMIRE Holographic Microscope (ADMIRE-HM) the user will be “shrunk” a million times and virtually sent into a live 3D reconstruction of the real microscopic world contained in the glass slide. There the user will find themselves surrounded by micro-particles or moving cells that can be inspected from multiple directions and characterized by shape parameters (e.g. size, volume, aspect ratio) or dynamical features (e.g. flagellar motility, sedimentation velocity, transport in a flow) by means of simple and direct gestures.
The expected outcome of the project is to bring to development stage TRL 6-7 a technology that could change the way we experience the microscopic world in basic research, biomedical applications and education.
Max ERC Funding
150 000 €
Duration
Start date: 2017-11-01, End date: 2019-04-30
Project acronym AfricanWomen
Project Women in Africa
Researcher (PI) Catherine GUIRKINGER
Host Institution (HI) UNIVERSITE DE NAMUR ASBL
Call Details Starting Grant (StG), SH1, ERC-2017-STG
Summary Rates of domestic violence and the relative risk of premature death for women are higher in sub-Saharan Africa than in any other region. Yet we know remarkably little about the economic forces, incentives and constraints that drive discrimination against women in this region, making it hard to identify policy levers to address the problem. This project will help fill this gap.
I will investigate gender discrimination from two complementary perspectives. First, through the lens of economic history, I will investigate the forces driving trends in women’s relative well-being since slavery. To quantify the evolution of well-being of sub-Saharan women relative to men, I will use three types of historical data: anthropometric indicators (relative height), vital statistics (to compute numbers of missing women), and outcomes of formal and informal family law disputes. I will then investigate how major economic developments and changes in family laws differentially affected women’s welfare across ethnic groups with different norms on women’s roles and rights.
Second, using intra-household economic models, I will provide new insights into domestic violence and gender bias in access to crucial resources in present-day Africa. I will develop a new household model that incorporates gender identity and endogenous outside options to explore the relationship between women’s empowerment and the use of violence. Using the notion of strategic delegation, I will propose a new rationale for the separation of budgets often observed in African households and generate predictions of how improvements in women’s outside options affect welfare. Finally, with first-hand data, I will investigate intra-household differences in nutrition and work effort in times of food shortage from the points of view of efficiency and equity. I will use activity trackers as an innovative means of collecting high-quality data on work effort and thus overcome data limitations restricting the existing literature.
Max ERC Funding
1 499 313 €
Duration
Start date: 2018-08-01, End date: 2023-07-31
Project acronym AGEnTh
Project Atomic Gauge and Entanglement Theories
Researcher (PI) Marcello DALMONTE
Host Institution (HI) SCUOLA INTERNAZIONALE SUPERIORE DI STUDI AVANZATI DI TRIESTE
Call Details Starting Grant (StG), PE2, ERC-2017-STG
Summary AGEnTh is an interdisciplinary proposal which aims at theoretically investigating atomic many-body systems (cold atoms and trapped ions) in close connection to concepts from quantum information, condensed matter, and high energy physics. The main goals of this programme are to:
I) Find scalable schemes for the measurement of entanglement properties, and in particular entanglement spectra, by proposing a paradigm shift in accessing entanglement, focused on entanglement Hamiltonians and field theories instead of probing density matrices;
II) Show how atomic gauge theories (including dynamical gauge fields) are ideal candidates for the realization of long-sought, highly-entangled states of matter, in particular topological superconductors supporting parafermion edge modes, and novel classes of quantum spin liquids emerging from clustering;
III) Develop new implementation strategies for the realization of gauge symmetries of paramount importance, such as discrete and SU(N)xSU(2)xU(1) groups, and establish a theoretical framework for the understanding of atomic physics experiments within the light-from-chaos scenario pioneered in particle physics.
These objectives are at the cutting edge of fundamental science, and represent a coherent effort aimed at underpinning unprecedented regimes of strongly interacting quantum matter by addressing the basic aspects of probing, many-body physics, and implementations. The results are expected to (i) build up and establish qualitatively new synergies between the aforementioned communities, and (ii) stimulate an intense theoretical and experimental activity focused on both entanglement and atomic gauge theories.
In order to achieve these goals, AGEnTh builds: (1) on my background working at the interface between atomic physics and quantum optics on one side, and many-body theory on the other, and (2) on exploratory studies which I carried out to mitigate the conceptual risks associated with its high-risk/high-gain goals.
Max ERC Funding
1 055 317 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym AI-CU
Project Automated Improvement of Continuous User interfaces
Researcher (PI) BART GERBEN DE BOER
Host Institution (HI) VRIJE UNIVERSITEIT BRUSSEL
Call Details Proof of Concept (PoC), ERC-2017-PoC
Summary We propose to develop two tools for creating, in a systematic way, better user interfaces based on continuous, non-symbolic actions, such as swipes on a touch screen, 3-D motions with a hand-held device, or breath patterns in a user interface for otherwise paralyzed patients. The tools are based on two experimental/computational techniques developed in the ABACUS project: iterated learning and social coordination.
In iterated learning, sets of signals produced by one user are learned and reproduced by another user. The reproductions are then in turn learned by the next user. In the ABACUS project, it has been shown that this results in more learnable sets of signals. We propose to show how this can be applied to creating learnable and usable signals in a systematic way when designing a user interface for a device that allows continuous actions.
In social coordination, it has been shown that signals become simplified and more abstract when people communicate over an extended period of time. The ABACUS project has developed techniques to detect and quantify this. We propose to show how these can be used for a user interface that adapts to its user. This will allow novice users to use more extended and therefore more learnable versions of actions, while the system adapts when users become more adept at using the interface and reduce their actions. Because the system is adaptive, the user is not constrained in how they do this.
Concretely, we propose to implement these two tools, investigate how they can be used optimally and advertise them to
interested companies, starting with ones with which we have contact, but extending our network at the start of the project through a business case development. In order to disseminate the results we propose to involve a user committee and organize one or more workshops.
Max ERC Funding
150 000 €
Duration
Start date: 2018-06-01, End date: 2019-11-30
Project acronym AMDROMA
Project Algorithmic and Mechanism Design Research in Online MArkets
Researcher (PI) Stefano LEONARDI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Online markets currently form an important share of the global economy. The Internet hosts classical markets (real-estate, stocks, e-commerce) as well as enabling new markets with previously unknown features (web-based advertisement, viral marketing, digital goods, crowdsourcing, sharing economy). Algorithms play a central role in many decision processes involved in online markets. For example, algorithms run electronic auctions, trade stocks, adjust prices dynamically, and harvest big data to provide economic information. Thus, it is of paramount importance to understand the algorithmic and mechanism design foundations of online markets.
The algorithmic research issues that we consider involve algorithmic mechanism design, online and approximation algorithms, modelling uncertainty in online market design, large-scale optimization, and data mining. The aim of this research project is to combine these fields to address research questions that are central for today's Internet economy. We plan to apply these techniques to solve fundamental algorithmic problems motivated by Internet advertisement, the sharing economy, and online labour marketplaces. While my planned research is centered on foundational work with rigorous design and analysis of algorithms and mechanisms, it will also include, as an important component, empirical validation on large-scale real-life datasets.
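The electronic auctions mentioned above are a canonical object of algorithmic mechanism design. As a purely illustrative sketch (not drawn from the proposal), the following shows the classic sealed-bid second-price (Vickrey) auction, the textbook mechanism in which truthful bidding is a dominant strategy; the bidder names and values are made up for the example:

```python
def second_price_auction(bids):
    """Sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid value.
    Returns (winner, price): the highest bidder wins but pays the
    second-highest bid, which makes truthful bidding a dominant strategy.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical bidders: the highest bid wins, at the runner-up's price.
winner, price = second_price_auction({"alice": 10.0, "bob": 7.0, "carol": 4.0})
```

Decoupling what the winner pays from what the winner bid is the key design choice: misreporting one's value can change whether one wins, but never lowers the price one pays upon winning.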
Max ERC Funding
1 780 150 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym ArsNova
Project European Ars Nova: Multilingual Poetry and Polyphonic Song in the Late Middle Ages
Researcher (PI) Maria Sofia LANNUTTI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI FIRENZE
Call Details Advanced Grant (AdG), SH5, ERC-2017-ADG
Summary Dante Alighieri at the dawn of the 1300s, as well as Eustache Deschamps almost a century later, conceived poetry as music in itself. But what happens with poetry when it is involved in the complex architecture of polyphony? The aim of this project is to study for the first time the corpus of 14th- and early 15th-century poetry set to music by Ars Nova polyphonists (more than 1200 texts). This repertoire gathers different poetic and musical traditions, as shown by the multilingual anthologies copied during the last years of the Schism. The choice of this corpus is motivated by two primary goals: a) to offer a new interpretation of its meaning and function in the cultural and historical context, one that may be then applied to the rest of coeval European lyric poetry; b) to overcome current disciplinary divisions in order to generate a new methodological balance between the project’s two main fields of interest (Comparative Literature / Musicology). Most Ars Nova polyphonists were directly associated with religious institutions. In many texts, the language of courtly love expresses the values of caritas, the theological virtue that guides wise rulers and leads them to desire the common good. Thus, the poetic figure of the lover becomes a metaphor for the political man, and love poetry can be used as a device for diplomacy, as well as for personal and institutional propaganda. From this unprecedented point of view, the project will develop three research lines in response to the following questions: 1) How is the relationship between poetry and music, and how is the dialogue between the different poetic and musical traditions viewed in relation to each context of production? 2) To what extent does Ars Nova poetry take part in the ‘soft power’ strategies exercised by the entire European political class of the time? 
3) Is there a connection between the multilingualism of the manuscript tradition and the perception of the Ars Nova as a European, intercultural repertoire?
Max ERC Funding
2 193 375 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym ASTHMACRYSTALCLEAR
Project Role of protein crystallization in type 2 immunity and asthma
Researcher (PI) Bart LAMBRECHT
Host Institution (HI) VIB
Call Details Advanced Grant (AdG), LS6, ERC-2017-ADG
Summary Spontaneous protein crystallization is a rare event in biology. Eosinophilic inflammation, such as that seen in the airways in asthma, chronic rhinosinusitis and helminth infection, is however accompanied by accumulation of large amounts of extracellular Charcot-Leyden crystals. These are made of Galectin-10, a protein of unknown function produced by eosinophils, hallmark cells of type 2 immunity. In mice, eosinophilic inflammation is also accompanied by protein crystal build-up, composed of the chitinase-like proteins Ym1 and Ym2, produced by alternatively activated macrophages. Here we challenge the current view that these crystals are just markers of eosinophil demise or macrophage activation. We hypothesize that protein crystallization plays an active role in the immunoregulation of type 2 immunity. On the one hand, crystallization might turn a harmless protein into a danger signal. On the other hand, crystallization might sequester and eliminate the physiological function of soluble Galectin-10 and Ym1, or prolong it via slow-release elution. For full understanding, we therefore need to understand the function of these proteins in both the soluble and crystalline states. Our program at the frontline of immunology, molecular structural biology and clinical science combines innovative tool creation and integrative research to investigate the structure, function, and physiology of Galectin-10 and related protein crystals. We chose to study asthma because the crystallizing proteins are abundantly present in human and murine disease. There is still a large medical need for novel therapies that could benefit patients with chronic steroid-resistant disease, and that are alternatives to eosinophil-depleting antibodies whose long-term effects are unknown.
Max ERC Funding
2 499 846 €
Duration
Start date: 2018-08-01, End date: 2023-07-31
Project acronym AUTISMS
Project Decomposing Heterogeneity in Autism Spectrum Disorders
Researcher (PI) Michael LOMBARDO
Host Institution (HI) FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
Call Details Starting Grant (StG), SH4, ERC-2017-STG
Summary Autism spectrum disorders (ASD) affect 1-2% of the population and are a major public health issue. Heterogeneity between affected ASD individuals is substantial both at clinical and etiological levels, thus warranting the idea that we should begin characterizing the ASD population as multiple kinds of ‘autisms’. Without an advanced understanding of how heterogeneity manifests in ASD, it is likely that we will not make pronounced progress towards translational research goals that can have real impact on patients’ lives. This research program is focused on decomposing heterogeneity in ASD at multiple levels of analysis. Using multiple ‘big data’ resources that are both ‘broad’ (large sample size) and ‘deep’ (multiple levels of analysis measured within each individual), I will examine how known variables such as sex, early language development, early social preferences, and early intervention treatment response may be important stratification variables that differentiate ASD subgroups at phenotypic, neural systems/circuits, and genomic levels of analysis. In addition to examining known stratification variables, this research program will engage in data-driven discovery via application of advanced unsupervised computational techniques that can highlight novel multivariate distinctions in the data that signal important ASD subgroups. These data-driven approaches may hold promise for discovering novel ASD subgroups at biological and phenotypic levels of analysis that may be valuable for prioritization in future work developing personalized assessment, monitoring, and treatment strategies for subsets of the ASD population. By enhancing the precision of our understanding of multiple subtypes of ASD, this work will help accelerate progress towards the ideals of personalized medicine and help to reduce the burden of ASD on individuals, families, and society.
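The data-driven subgroup discovery described above rests on unsupervised clustering. As a minimal illustration only — plain k-means on synthetic two-dimensional "phenotype" data, not the project's actual methods or data — a self-contained NumPy sketch:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Plain k-means with farthest-point initialization: the simplest
    example of unsupervised stratification into subgroups."""
    centers = [X[0]]
    for _ in range(1, k):  # seed each new center at the point farthest from existing ones
        d = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign each sample to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# two well-separated synthetic "subgroups" in a 2-D phenotype space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])
labels, centers = kmeans(X, k=2)
print(np.bincount(labels))  # two roughly balanced subgroups recovered
```

Real multi-level data would of course call for far richer models (mixture models, spectral methods), but the logic — let the data, not prior labels, define the subgroups — is the same.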
Max ERC Funding
1 499 444 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym BACKUP
Project Unveiling the relationship between brain connectivity and function by integrated photonics
Researcher (PI) Lorenzo PAVESI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TRENTO
Call Details Advanced Grant (AdG), PE7, ERC-2017-ADG
Summary I will address the fundamental question of what role neuron activity and plasticity play in information processing and storage in the brain. I, together with an interdisciplinary team, will develop a hybrid neuromorphic computing platform. Integrated photonic circuits will be interfaced to both electronic circuits and neuronal circuits (in vitro experiments) to emulate brain functions and develop schemes able to supplement (backup) neuronal functions. The photonic network is based on massive reconfigurable matrices of nonlinear nodes formed by microring resonators, which enter a regime of self-pulsing and chaos through positive optical feedback. These networks resemble the human brain. I will push this analogy further by interfacing the photonic network with neurons, creating a hybrid network. By using optogenetics, I will control the synaptic strengthening and the neuron activity. Deep learning algorithms will model the biological network functionality, initially within a separate artificial network and, then, in an integrated hybrid artificial-biological network.
My project aims at:
1. Developing a photonic integrated reservoir-computing network (RCN);
2. Developing dynamic memories in photonic integrated circuits using RCN;
3. Developing hybrid interfaces between a neuronal network and a photonic integrated circuit;
4. Developing a hybrid electronic, photonic and biological network that computes jointly;
5. Addressing neuronal network activity by photonic RCN to simulate in vitro memory storage and retrieval;
6. Processing the signals from the RCN and neuronal circuits in order to cope with plastic changes in pathological brain conditions such as amnesia and epilepsy.
The long-term vision is that hybrid neuromorphic photonic networks will (a) clarify the way the brain thinks, (b) compute beyond von Neumann, and (c) control and supplement specific neuronal functions.
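The reservoir-computing principle behind aims 1–2 — a fixed, random nonlinear network whose only trained part is a linear readout — can be sketched in software. The following is a toy NumPy analogue (an echo-state network on a one-step memory task); the actual platform uses photonic microring nodes, not simulated tanh units:

```python
import numpy as np

def esn_fit_predict(u, y, n_res=200, spectral_radius=0.9, seed=0):
    """Minimal echo-state network: random fixed reservoir, ridge-trained readout."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))   # input weights (fixed)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))  # recurrent weights (fixed)
    # rescale so the largest eigenvalue modulus equals the spectral radius
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W_in[:, 0] * ut + W @ x)    # drive the reservoir
        states[t] = x
    # only the linear readout is trained (ridge regression)
    ridge = 1e-6
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y)
    return states @ W_out

# short-term memory task: reproduce the previous input sample
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)
y = np.roll(u, 1)
pred = esn_fit_predict(u, y)
print(np.mean((pred[50:] - y[50:]) ** 2))  # small error after the initial washout
```

The design point this illustrates is why reservoirs suit photonics: the random nonlinear dynamics need no training, so only a simple readout has to be adjusted on hardware.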
Max ERC Funding
2 499 825 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym BIORECAR
Project Direct cell reprogramming therapy in myocardial regeneration through an engineered multifunctional platform integrating biochemical instructive cues
Researcher (PI) Valeria CHIONO
Host Institution (HI) POLITECNICO DI TORINO
Call Details Consolidator Grant (CoG), PE8, ERC-2017-COG
Summary In BIORECAR I will develop a new breakthrough multifunctional biomaterial-based platform for myocardial regeneration after myocardial infarction, provided with biochemical cues able to enhance the direct reprogramming of human cardiac fibroblasts into functional cardiomyocytes.
My expertise in bioartificial materials and biomimetic scaffolds and the versatile chemistry of polyurethanes will be the key elements to achieve a significant knowledge and technological advancement in cell reprogramming therapy, opening the way to the future translation of the therapy into the clinics.
I will implement this advanced approach through the design of a novel 3D in vitro tissue-engineered model of human cardiac fibrotic tissue, as a tool for testing and validation, to maximise research efforts and reduce animal tests.
I will adapt novel nanomedicine approaches I have recently developed for drug release to design innovative cell-friendly and efficient polyurethane nanoparticles for targeted reprogramming of cardiac fibroblasts.
I will design an injectable bioartificial hydrogel based on a blend of a thermosensitive polyurethane and a natural component selected among a novel cell-secreted natural polymer mixture (“biomatrix”) recapitulating the complexity of cardiac extracellular matrix or one of its main protein constituents. Such multifunctional hydrogel will deliver in situ agents stimulating recruitment of cardiac fibroblasts together with the nanoparticles loaded with reprogramming therapeutics, and will provide biochemical signalling to stimulate efficient conversion of fibroblasts into mature cardiomyocytes.
First-in-field biomaterials-based innovations introduced by BIORECAR will enable more effective regeneration of functional myocardial tissue with respect to state-of-the-art approaches. BIORECAR innovation is multidisciplinary in nature and will be accelerated towards future clinical translation through my clinical, scientific and industrial collaborations.
Max ERC Funding
2 000 000 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym Bits2Cosmology
Project Time-domain Gibbs sampling: From bits to inflationary gravitational waves
Researcher (PI) Hans Kristian ERIKSEN
Host Institution (HI) UNIVERSITETET I OSLO
Call Details Consolidator Grant (CoG), PE9, ERC-2017-COG
Summary The detection of primordial gravity waves created during the Big Bang ranks among the greatest potential intellectual achievements in modern science. During the last few decades, the instrumental progress necessary to achieve this has been nothing short of breathtaking, and today we are able to measure the microwave sky with better than one-in-a-million precision. However, from the latest ultra-sensitive experiments such as BICEP2 and Planck, it is clear that instrumental sensitivity alone will not be sufficient to make a robust detection of gravitational waves. Contamination in the form of astrophysical radiation from the Milky Way, for instance thermal dust and synchrotron radiation, obscures the cosmological signal by orders of magnitude. Even more critical, though, are second-order interactions between this radiation and the instrument characterization itself, which lead to a highly non-linear and complicated problem.
I propose a ground-breaking solution to this problem that allows for joint estimation of cosmological parameters, astrophysical components, and instrument specifications. The engine of this method is called Gibbs sampling, which I have already applied extremely successfully to basic CMB component separation. The new and critical step is to apply this method to raw time-ordered observations recorded directly by the instrument, as opposed to pre-processed frequency maps. While representing a ~100-fold increase in input data volume, this step is unavoidable in order to break through the current foreground-induced systematics floor. I will apply this method to the best currently available and future data sets (WMAP, Planck, SPIDER and LiteBIRD), and thereby derive the world's tightest constraint on the amplitude of inflationary gravitational waves. Additionally, the resulting ancillary science in the form of robust cosmological parameters and astrophysical component maps will represent the state-of-the-art in observational cosmology in years to come.
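The Gibbs-sampling engine referred to above alternates draws from each parameter's conditional distribution given all the others. A toy sketch on a two-parameter problem (a correlated bivariate normal, where both conditionals are known in closed form; the joint CMB model is vastly larger but follows the same alternating logic):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500, seed=0):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal
    with correlation rho. Each full conditional is a 1-D normal:
      x | y ~ N(rho * y, 1 - rho^2)
      y | x ~ N(rho * x, 1 - rho^2)
    so the joint is explored by alternating simple 1-D draws."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    cond_sd = np.sqrt(1.0 - rho ** 2)
    samples = np.empty((n_samples, 2))
    for i in range(burn_in + n_samples):
        x = rng.normal(rho * y, cond_sd)  # draw x from p(x | y)
        y = rng.normal(rho * x, cond_sd)  # draw y from p(y | x)
        if i >= burn_in:
            samples[i - burn_in] = (x, y)
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(samples.T)[0, 1])  # empirical correlation, close to 0.8
```

The appeal for the joint estimation problem above is the same as in this toy: each conditional (sky signal given instrument parameters, instrument parameters given sky signal, ...) is tractable even when the joint posterior is not.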
Max ERC Funding
1 999 205 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym CABUM
Project An investigation of the mechanisms at the interaction between cavitation bubbles and contaminants
Researcher (PI) Matevz DULAR
Host Institution (HI) UNIVERZA V LJUBLJANI
Call Details Consolidator Grant (CoG), PE8, ERC-2017-COG
Summary A sudden decrease in pressure triggers the formation of vapour and gas bubbles inside a liquid medium (also called cavitation). This leads to many (key) engineering problems: material loss, noise and vibration of hydraulic machinery. On the other hand, cavitation is potentially a useful phenomenon: the extreme conditions are increasingly used for a wide variety of applications such as surface cleaning, enhanced chemistry, and waste water treatment (bacteria eradication and virus inactivation).
Despite this significant progress, a large gap persists between the understanding of the mechanisms that contribute to the effects of cavitation and its application. Although engineers are already commercializing devices that employ cavitation, we are still not able to answer the fundamental question: by precisely what mechanisms do bubbles clean, disinfect, kill bacteria and enhance chemical activity? The overall objective of the project is to understand and determine the fundamental physics of the interaction of cavitation bubbles with different contaminants. To address this issue, the CABUM project will investigate the physical background of cavitation from physical, biological and engineering perspectives at three complexity scales: i) the single-bubble level, ii) organised bubble clusters and iii) random bubble clusters, producing a progressive multidisciplinary synergetic effect.
The proposed synergetic approach builds on the PI's preliminary research and employs novel experimental and numerical methodologies, some of which have been developed by the PI and his research group, to explore the physics of cavitation behaviour in interaction with bacteria and viruses.
Understanding the fundamental physical background of cavitation in interaction with contaminants will have ground-breaking implications in various scientific fields (engineering, chemistry and biology) and will, in the future, enable the exploitation of cavitation in water and soil treatment processes.
Max ERC Funding
1 904 565 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym CALCULUS
Project Commonsense and Anticipation enriched Learning of Continuous representations sUpporting Language UnderStanding
Researcher (PI) Marie-Francine MOENS
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Natural language understanding (NLU) by the machine is of large scientific, economic and social value. Humans perform the NLU task in an efficient way by relying on their capability to imagine or anticipate situations. They engage commonsense and world knowledge that is often acquired through perceptual experiences to make explicit what is left implicit in language. Inspired by these characteristics CALCULUS will design, implement and evaluate innovative paradigms supporting NLU, where it will combine old but powerful ideas for language understanding from the early days of artificial intelligence with new approaches from machine learning. The project focuses on the effective learning of anticipatory, continuous, non-symbolic representations of event frames and narrative structures of events that are trained on language and visual data. The grammatical structure of language is grounded in the geometric structure of visual data while embodying aspects of commonsense and world knowledge. The reusable representations are evaluated in a selection of NLU tasks requiring efficient real-time retrieval of the representations and parsing of the targeted written texts. Finally, we will evaluate the inference potential of the anticipatory representations in situations not seen in the training data and when inferring spatial and temporal information in metric real world spaces that is not mentioned in the processed language. The machine learning methods focus on learning latent variable models relying on Bayesian probabilistic models and neural networks and focus on settings with limited training data that are manually annotated. The best models will be integrated in a demonstrator that translates the language of stories to events happening in a 3-D virtual world. 
The PI has interdisciplinary expertise in natural language processing, joint processing of language and visual data, information retrieval and machine learning needed for the successful realization of the project.
Max ERC Funding
2 227 500 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym CANITEST
Project CANITEST: Proof-Of-Concept of a PCR test designed to identify the dogs carrying the more virulent strains of Capnocytophaga canimorsus
Researcher (PI) Guy Cornélis
Host Institution (HI) UNIVERSITE DE NAMUR ASBL
Call Details Proof of Concept (PoC), ERC-2017-PoC
Summary The idea to be taken as proof of concept is drawn from the ERC Advanced grant N°293605-CAPCAN (2012-2016). This grant aimed at understanding the molecular and genetic bases of the dramatic human infections caused by Capnocytophaga canimorsus. One of the questions that we addressed in the frame of CAPCAN is why there are so few cases of human infections while so many dogs carry C. canimorsus. In other words, are all C. canimorsus strains equally dangerous and, if not, could we prevent the disease by detecting the dogs carrying the more dangerous strains? During CAPCAN, among others, we showed that C. canimorsus is endowed with a capsular polysaccharide (CPS) and its assembly pathway was characterized [1]. We also showed that the CPS of 25/25 strains isolated from human infections present a limited variability, with 3 dominant capsular serovars. In contrast, only 4 out of 52 C. canimorsus strains isolated from dog mouths belonged to these three serovars [2]. This implies that a small minority of dog-hosted C. canimorsus strains are more virulent for humans than most strains, and that these strains can be identified by capsular serotyping. We also set up a PCR test to achieve this capsular serotyping [2]. The proposal to be taken to proof of concept is to market the PCR test designed to identify the dogs carrying the more virulent strains of C. canimorsus.
Max ERC Funding
150 000 €
Duration
Start date: 2017-12-01, End date: 2019-05-31
Project acronym CellKarma
Project Dissecting the regulatory logic of cell fate reprogramming through integrative and single cell genomics
Researcher (PI) Davide CACCHIARELLI
Host Institution (HI) FONDAZIONE TELETHON
Call Details Starting Grant (StG), LS2, ERC-2017-STG
Summary The concept that any cell type, upon delivery of the right “cocktail” of transcription factors, can acquire an identity that it would otherwise never achieve, revolutionized the way we approach the study of developmental biology. In light of this, the discovery of induced pluripotent stem cells (IPSCs) and cell fate conversion approaches stimulated new research directions into human regenerative biology. However, the chance to successfully develop patient-tailored therapies is still very limited because reprogramming technologies are applied without a comprehensive understanding of the molecular processes involved.
Here, I propose a multifaceted approach that combines a wide range of cutting-edge integrative genomic strategies to significantly advance our understanding of the regulatory logic driving cell fate decisions during human reprogramming to pluripotency.
To this end, I will utilize single cell transcriptomics to isolate reprogramming intermediates, reconstruct their lineage relationships and define transcriptional regulators responsible for the observed transitions (AIM 1). Then, I will dissect the rules by which transcription factors modulate the activity of promoters and enhancer regions during reprogramming transitions, by applying synthetic biology and genome editing approaches (AIM 2). Next, I will adopt an alternative approach to identify reprogramming modulators through the analysis of reprogramming-induced mutagenesis events (AIM 3). Finally, I will explore my findings in multiple primary reprogramming approaches to pluripotency, with the ultimate goal of improving the quality of IPSC derivation (AIM 4).
In summary, this project will expose novel determinants and yet unidentified molecular barriers of reprogramming to pluripotency and will be essential to unlock the full potential of reprogramming technologies for shaping cellular identity in vitro and to address pressing challenges of regenerative medicine.
Max ERC Funding
1 497 250 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym CIVICS
Project Criminality, Victimization and Social Interactions
Researcher (PI) Katrine Vellesen LOKEN
Host Institution (HI) NORGES HANDELSHOYSKOLE
Call Details Starting Grant (StG), SH1, ERC-2017-STG
Summary A large social science literature tries to describe and understand the causes and consequences of crime, usually focusing on individuals’ criminal activity in isolation. The ambitious aim of this research project is to establish a broader perspective of crime that takes into account the social context in which it takes place. The findings will inform policymakers on how to better use funds both for crime prevention and the rehabilitation of incarcerated criminals.
Criminal activity is often a group phenomenon, yet little is known about how criminal networks form and what can be done to break them up or prevent them from forming in the first place. Overlooking victims of crime and their relationships to criminals has led to an incomplete and distorted view of crime and its individual and social costs. While a better understanding of these social interactions is crucial for designing more effective anti-crime policy, existing research in criminology, sociology and economics has struggled to identify causal effects due to data limitations and difficult statistical identification issues.
This project will push the research frontier by combining register datasets that have never been merged before, and by using several state-of-the-art statistical methods to estimate causal effects related to criminal peer groups and their victims. More specifically, we aim to do the following:
-Use recent advances in network modelling to describe the structure and density of various criminal networks and study network dynamics following the arrest/incarceration or death of a central player in a network.
-Obtain a more accurate measure of the societal costs of crime, including actual measures for lost earnings and physical and mental health problems, following victims and their offenders both before and after a crime takes place.
-Conduct a randomized controlled trial within a prison system to better understand how current rehabilitation programs affect criminal and victim networks.
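The first aim above — tracking how a criminal network fragments when a central player is arrested or dies — can be sketched with a toy graph. This is an illustrative example only (the node names and edges are invented, not project data): the most connected node is removed and the number of connected components is recounted.

```python
# Toy criminal network as an adjacency list. We identify the most central
# player (here, simple degree centrality) and count connected components
# before and after that node is removed -- the kind of network dynamic
# the project proposes to study with register data.
graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

def components(g):
    """Count connected components via depth-first search."""
    seen, comps = set(), 0
    for start in g:
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(g[node] - seen)
    return comps

central = max(graph, key=lambda n: len(graph[n]))  # highest-degree node
pruned = {n: nbrs - {central} for n, nbrs in graph.items() if n != central}
print(central, components(graph), components(pruned))  # A 1 2
```

Removing the hub splits one network into two isolated cells, which is why the project focuses on central players rather than arbitrary members.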
Max ERC Funding
1 187 046 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym CO2LIFE
Project BIOMIMETIC FIXATION OF CO2 AS SOURCE OF SALTS AND GLUCOSE
Researcher (PI) Patricia LUIS ALCONERO
Host Institution (HI) UNIVERSITE CATHOLIQUE DE LOUVAIN
Call Details Starting Grant (StG), PE8, ERC-2017-STG
Summary The continued increase in the atmospheric concentration of CO2 due to anthropogenic emissions is leading to significant changes in climate, with industry accounting for one-third of all the energy used globally and for almost 40% of worldwide CO2 emissions. Fast action is required to decrease the concentration of this greenhouse gas in the atmosphere, a value that has now reached 400 ppm. Among the technological possibilities that are on the table to reduce CO2 emissions, carbon capture and storage in geological deposits is one of the main strategies being applied. However, the final objective of this strategy is to remove CO2 without considering the enormous potential of this molecule as a source of carbon for the production of valuable compounds. Nature has developed an effective and equilibrated mechanism to concentrate CO2 and fix the inorganic carbon into organic material (e.g., glucose) by means of enzymatic action. Mimicking Nature and taking advantage of millions of years of evolution should be considered as a basic starting point in the development of smart and highly effective processes. In addition, the use of amino-acid salts for CO2 capture is envisaged as a potential approach to recover CO2 in the form of (bi)carbonates.
The project CO2LIFE has the overall objective of developing a chemical process that converts carbon dioxide into valuable molecules using membrane technology. The strategy followed in this project is two-fold: i) a membrane-based CO2 absorption-crystallization process using amino-acid salts, and ii) CO2 conversion into glucose or salts using enzymes as catalysts supported on or retained by membranes. The final product, i.e. (bi)carbonates or glucose, is of great interest to the (bio)chemical industry; thus, new CO2 emissions are avoided and the carbon cycle is closed. This project will provide a technological solution at industrial scale for the removal and reutilization of CO2.
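The carbon balance behind the glucose route can be checked with back-of-the-envelope stoichiometry. This is illustrative arithmetic on the net fixation reaction 6 CO2 + 6 H2O → C6H12O6 + 6 O2, not a figure from the project's process design:

```python
# Stoichiometric sketch: how much CO2 is consumed per kg of glucose
# synthesized via net fixation 6 CO2 + 6 H2O -> C6H12O6 + 6 O2.
M_CO2 = 44.01      # g/mol, carbon dioxide
M_GLUCOSE = 180.16  # g/mol, C6H12O6

def co2_fixed_per_kg_glucose(kg_glucose):
    """Mass of CO2 (kg) fixed to produce a given mass of glucose."""
    mol_glucose = kg_glucose * 1000 / M_GLUCOSE
    return 6 * mol_glucose * M_CO2 / 1000

print(round(co2_fixed_per_kg_glucose(1.0), 3))  # ~1.466 kg CO2 per kg glucose
```

Roughly 1.5 kg of CO2 is locked into each kilogram of glucose, which is why closing the carbon cycle through a marketable product is attractive.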
Max ERC Funding
1 302 710 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym COBRAS
Project COvariance Based RAman Spectrometer (COBRAS)
Researcher (PI) Daniele FAUSTI
Host Institution (HI) ELETTRA - SINCROTRONE TRIESTE SCPA
Call Details Proof of Concept (PoC), ERC-2019-PoC
Summary In COBRAS we will establish Femtosecond Covariance Spectroscopy, a new spectroscopic technique to measure the optical response of materials which is based on stochastic light pulses characterized by frequency-uncorrelated intensity fluctuations. Because the light has different properties at every repetition, each reiteration of the experiment can be considered a measurement under new conditions rather than a repetition of the same experiment. Crucially, within the ERC_StG project INCEPT we have demonstrated that in this limit the frequencies of the Raman modes of a sample can be retrieved by measuring the spectral correlations between different pulses which are induced by the interaction with the sample. This is in striking contrast with standard approaches to Raman spectroscopy, which are based on the measurement of the integrated emission of Raman sidebands at a given frequency and therefore require high-stability, low-noise detection that can be achieved only at significant expense. Conversely, in covariance-based methods noise is a resource that can be exploited (rather than an impediment), and a much simpler and cheaper architecture for the spectrometer can be envisioned.
The central idea of COBRAS is to pave the way for commercial exploitation of covariance-based approaches to Raman spectroscopy. To this purpose we will develop a prototype spectrometer, study the general applicability of covariance-based methods and identify viable strategies for the commercialization of the spectrometer developed. We stress that the concept proposed here for Raman spectroscopy can be extended to different optical techniques and wavelength ranges. This makes us confident that the COBRAS investment may represent a paradigmatic change in the approach to optical spectroscopy, potentially disclosing a new market across different industrial and scientific spectroscopic applications.
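The covariance idea can be illustrated numerically. In this hypothetical toy model (not the INCEPT formalism), a Raman mode couples each spectral bin to the bin shifted by the mode frequency, so noisy shot-to-shot spectra acquire an off-diagonal ridge in their covariance matrix at exactly that shift — no low-noise detection required:

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_freq, raman_shift = 5000, 200, 37  # shift in frequency bins (toy units)

# Stochastic input pulses: frequency-uncorrelated intensity noise per bin
base = rng.normal(1.0, 0.3, size=(n_shots, n_freq))

# Toy light-sample interaction: the Raman mode transfers a small amount of
# intensity from bin w to bin w + raman_shift, correlating the two bins
out = base.copy()
out[:, raman_shift:] += 0.2 * base[:, :-raman_shift]

# Covariance of the detected spectra over many noisy shots
cov = np.cov(out, rowvar=False)  # shape (n_freq, n_freq)

# Average each off-diagonal: the Raman shift appears as a correlation ridge
ridge = np.array([np.diagonal(cov, offset=d).mean() for d in range(1, n_freq)])
recovered = int(np.argmax(ridge)) + 1
print(recovered)  # recovers the shift of 37 bins
```

The mode frequency is read off from where the inter-pulse correlations peak, which is why shot-to-shot fluctuations act as a resource here rather than an impediment.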
Max ERC Funding
150 000 €
Duration
Start date: 2019-07-01, End date: 2020-12-31