Project acronym FLAMENCO
Project A Fully-Implantable MEMS-Based Autonomous Cochlear Implant
Researcher (PI) Kulah Haluk
Host Institution (HI) MIDDLE EAST TECHNICAL UNIVERSITY
Country Turkey
Call Details Consolidator Grant (CoG), PE7, ERC-2015-CoG
Summary Sensorineural impairment, which accounts for the majority of profound deafness, can be treated with cochlear implants (CIs), which electrically stimulate the auditory nerve to restore hearing in people with severe-to-profound hearing loss. A conventional CI consists of an external microphone, a sound processor, a battery, an RF transceiver pair, and a cochlear electrode. The major drawback of conventional CIs is that they replace the entire natural hearing mechanism with electronic hearing, even though most parts of the middle ear remain operational. In addition, power-hungry units such as the microphone and RF transceiver limit continuous access to sound because of battery constraints. The risk of damage to external components, especially on exposure to water, and aesthetic concerns are further critical problems. The limited volume of the middle ear is the main obstacle to developing fully implantable CIs.
FLAMENCO proposes a fully implantable, autonomous, and low-power CI that exploits the functional parts of the middle ear and mimics the hair cells via a set of piezoelectric cantilevers covering the daily acoustic band. FLAMENCO is groundbreaking in that it revolutionizes the operating principle of CIs. The implant has five main units: i) piezoelectric transducers for sound detection and energy harvesting, ii) electronics for signal processing and battery charging, iii) an RF coil for tuning the electronics to allow customization, iv) a rechargeable battery, and v) a cochlear electrode for neural stimulation. The use of internal energy harvesting, together with the elimination of continuous RF transmission, the microphone, and front-end filters, makes this system a perfect candidate for next-generation autonomous CIs. In this project, a multi-frequency self-powered implant for in vivo operation will be implemented, and its feasibility will be proven through animal tests.
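To give a feel for how a bank of piezoelectric cantilevers can span the acoustic band, the following minimal Python sketch computes the fundamental resonant frequency of a rectangular cantilever from standard Euler-Bernoulli beam theory; the dimensions and PZT-like material constants are hypothetical placeholders, not the project's actual transducer design.

import math

def cantilever_f1(length_m, thickness_m, youngs_modulus_pa, density_kg_m3):
    """Fundamental resonant frequency of a rectangular cantilever
    (Euler-Bernoulli beam, first mode, lambda_1 = 1.8751; the width
    cancels because I/A = t^2/12 for a rectangular cross-section)."""
    lam1 = 1.8751
    return (lam1 ** 2 / (2 * math.pi * length_m ** 2)) * math.sqrt(
        youngs_modulus_pa * thickness_m ** 2 / (12 * density_kg_m3))

# Hypothetical PZT-like film: E = 60 GPa, rho = 7500 kg/m^3, t = 10 um.
# Sweeping the beam length tunes each cantilever to a different band.
for length_um in (500, 1000, 2000, 4000):
    f = cantilever_f1(length_um * 1e-6, 10e-6, 60e9, 7500)
    print(f"L = {length_um:4d} um -> f1 = {f:8.0f} Hz")

Under these assumed parameters, beams from 0.5 mm to 4 mm long resonate from roughly 18 kHz down to about 0.3 kHz, which illustrates how an array of such cantilevers could tile the audible band.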
Max ERC Funding
1 993 750 €
Duration
Start date: 2016-07-01, End date: 2022-06-30
Project acronym LABFER
Project Globalisation- and Technology-Driven Labour Market Change and Fertility
Researcher (PI) Anna MATYSIAK
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Country Poland
Call Details Consolidator Grant (CoG), SH3, ERC-2019-COG
Summary LABFER is the first project that will comprehensively describe and evaluate the fertility consequences of the unprecedented changes in the labour market caused by digitalisation and globalisation. These changes have been taking place over the last three decades and intensified after the Great Recession. They are reflected in rising demand for skills, massive worker displacement, the spread of new work arrangements, increasing work demands, and growing inequalities in labour market prospects between the low- and medium-skilled and the highly skilled. They are likely driving the post-crisis fertility decline in the most advanced nations, which to date is not understood. LABFER is thus highly relevant and timely. It has four main objectives:
1) to study the impact of the ongoing labour market change on fertility (macro-level);
2) to examine the individual-level mechanisms behind the observed macro-level fertility effects of the ongoing labour market change;
3) to investigate the role of the growing inequalities between the low- and medium-skilled and the highly skilled for the relative fertility patterns of the two groups;
4) to study the role of family and employment policies in moderating the fertility effects of the labour market change.
Our methodological approach is innovative. We will link data at several layers of observation (country, region, industry, firm, couple and individual) to account for the policy, work and family context of childbearing. We will also use novel labour market measures to capture the ongoing labour market change. Mixture cure models will be employed to separate the effects of covariates on the timing of births from their effects on the probability that a birth occurs at all.
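As a concrete illustration of the mixture cure idea, the minimal Python sketch below fits such a model by maximum likelihood: the population survival function mixes a fraction who never experience a birth (logistic link) with an exponential timing model for those who do. The covariates, link functions, and simulated data are illustrative assumptions, not the project's actual specification.

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, X, t, event):
    """Negative log-likelihood of a mixture cure model:
    S_pop(t|x) = pi(x) + (1 - pi(x)) * exp(-rate(x) * t),
    where pi(x) is the probability of never having a birth
    (logistic link) and birth timing among the rest is
    exponential with a log link."""
    k = X.shape[1]
    beta, gamma = params[:k], params[k:]
    pi = 1.0 / (1.0 + np.exp(-X @ beta))   # P(no birth ever | x)
    rate = np.exp(X @ gamma)               # birth hazard if susceptible
    s_u = np.exp(-rate * t)                # survival of the susceptible
    f_u = rate * s_u                       # density of the susceptible
    lik = np.where(event == 1, (1 - pi) * f_u, pi + (1 - pi) * s_u)
    return -np.sum(np.log(lik + 1e-12))

# Simulate: intercept plus one covariate (say, job insecurity).
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
cured = rng.random(n) < 1.0 / (1.0 + np.exp(-X @ np.array([-1.0, 0.8])))
t_birth = rng.exponential(1.0 / np.exp(X @ np.array([-2.0, 0.3])))
censor = rng.uniform(5.0, 15.0, n)
t = np.where(cured, censor, np.minimum(t_birth, censor))
event = ((~cured) & (t_birth <= censor)).astype(int)

res = minimize(neg_log_lik, np.zeros(4), args=(X, t, event), method="BFGS")
print("estimated (beta, gamma):", np.round(res.x, 2))

The key design feature is that a standard survival model would conflate postponement with forgone births, whereas the cure fraction pi(x) lets a covariate delay births, suppress them altogether, or both.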
LABFER will break new ground by providing an understanding of how the dynamic labour market changes are associated with, and potentially affect, current and future fertility dynamics and their socio-economic gradients. It will also have implications for family and employment policies.
Max ERC Funding
1 998 100 €
Duration
Start date: 2020-10-01, End date: 2025-09-30
Project acronym LIPA
Project A unified theory of finite-state recognisability
Researcher (PI) Mikolaj Konstanty Bojanczyk
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Country Poland
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Finite-state devices such as finite automata and monoids on finite words, or their extensions to trees and infinite objects, are fundamental tools of logic in computer science. There are tens of models in the literature, ranging from finite automata on finite words to weighted automata on infinite trees. Many existing finite-state models share important similarities, such as the existence of canonical (minimal) devices, the decidability of emptiness, or a logic-automata connection; two of these are illustrated in the sketch after the list below. The first and primary goal of this project is to systematically investigate these similarities and create a unified theory of finite-state devices, which:
1. covers the whole spectrum of existing finite-state devices, including settings with diverse inputs (e.g. words and trees, or infinite inputs, or infinite alphabets) and diverse outputs (e.g. Boolean like in the classical automata, or numbers like in weighted automata); and
2. sheds light on the correct notion of finite-state device in settings where there is no universally accepted choice or where finite-state devices have not been considered at all.
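Here is a minimal sketch of two of these shared properties on the simplest model, DFAs over finite words: emptiness reduces to reachability, and the canonical minimal device is obtained by Moore's partition refinement (counting Myhill-Nerode classes). The encoding of the DFA is an illustrative choice, not a construction from the project.

from collections import deque

def is_empty(dfa):
    """A DFA recognises the empty language iff no accepting
    state is reachable from the start state (BFS reachability)."""
    states, alphabet, delta, start, accepting = dfa
    seen, queue = {start}, deque([start])
    while queue:
        q = queue.popleft()
        if q in accepting:
            return False
        for a in alphabet:
            r = delta[(q, a)]
            if r not in seen:
                seen.add(r)
                queue.append(r)
    return True

def num_myhill_nerode_classes(dfa):
    """Moore's partition refinement: split blocks of states until
    stable; the block count is the size of the minimal DFA
    (assuming every state is reachable)."""
    states, alphabet, delta, start, accepting = dfa
    block = {q: int(q in accepting) for q in states}
    while True:
        sig = {q: (block[q],) + tuple(block[delta[(q, a)]] for a in alphabet)
               for q in states}
        ids, new_block = {}, {}
        for q in sorted(states):
            new_block[q] = ids.setdefault(sig[q], len(ids))
        if new_block == block:
            return len(ids)
        block = new_block

# Words over {a, b} with an even number of 'a's; states also track the
# (irrelevant) parity of 'b's, so the minimal automaton has 2 states.
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 0, (1, 'b'): 3,
         (2, 'a'): 3, (2, 'b'): 0, (3, 'a'): 2, (3, 'b'): 1}
dfa = ({0, 1, 2, 3}, ['a', 'b'], delta, 0, {0, 2})
print(is_empty(dfa), num_myhill_nerode_classes(dfa))  # False 2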
The theory of finite-state devices is one of those fields of theory where even the more advanced results have natural potential for applications. It is surprising and sad how little of this potential is normally realised, with most existing software using only the most rudimentary theoretical techniques. The second goal of the project is to create two tools which use more advanced aspects of the theory of automata to solve simple problems of wide applicability (i.e. at least tens of thousands of users):
1. a system that automatically grades exercises in automata, which goes beyond simple testing and forces the students to write proofs;
2. a system that uses learning to synthesise text transformations (such as search-and-replace, but also more powerful ones) from examples.
Max ERC Funding
1 768 125 €
Duration
Start date: 2016-05-01, End date: 2021-10-31
Project acronym NLL
Project Nonlinear Laser Lithography
Researcher (PI) Fatih Ömer Ilday
Host Institution (HI) BILKENT UNIVERSITESI VAKIF
Country Turkey
Call Details Consolidator Grant (CoG), PE2, ERC-2013-CoG
Summary "Control of matter via light has always fascinated humankind; not surprisingly, laser patterning of materials is as old as the history of the laser. However, this approach has suffered to date from a stubborn lack of long-range order. We have recently discovered a method for regulating self-organised formation of metal-oxide nanostructures at high speed via non-local feedback, thereby achieving unprecedented levels of uniformity over indefinitely large areas by simply scanning the laser beam over the surface.
Here, we propose to develop hitherto unimaginable levels of control over matter through laser light. The total optical field at any point is determined by the incident laser field and the light scattered from the surrounding surface, in a mathematical form similar to that of a hologram. Thus, it is only logical to control the self-organised pattern through the laser field using, e.g., a spatial light modulator. A simple wavefront tilt should change the periodicity of the nanostructures, but much more exciting possibilities include the creation of patterns without translational symmetry, i.e., quasicrystals, or patterns evolving non-trivially under scanning, akin to cellular automata. Our initial results were obtained in ambient atmosphere, where oxygen is the dominant reactant, forming oxides. We further propose to control the chemistry by using a plasma jet to sputter a chosen reactive species onto the surface, which is then activated by the laser. While we will focus on the basic mechanisms, with atomic nitrogen as a test reactant to generate compounds such as TiN and SiN, in principle this approach paves the way to the synthesis of an endless list of materials.
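As a rough illustration of the wavefront-tilt effect, the classic interference model of laser-induced periodic surface structures predicts ripple periods of lambda/(1 +/- sin theta) for light of wavelength lambda incident at angle theta, so tilting the wavefront retunes the periodicity. The short Python sketch below evaluates this textbook relation; it is a simplified stand-in, assuming a 1030 nm source, not the project's full scattered-field model.

import math

def lipss_periods_nm(wavelength_nm, incidence_deg):
    """Textbook interference model for laser-induced periodic surface
    structures: the incident field beats with a surface-scattered wave,
    giving ripple periods lambda/(1 - sin t) and lambda/(1 + sin t)."""
    s = math.sin(math.radians(incidence_deg))
    return wavelength_nm / (1 - s), wavelength_nm / (1 + s)

# Assumed 1030 nm wavelength; at normal incidence both periods coincide.
for theta in (0, 10, 20, 30):
    p1, p2 = lipss_periods_nm(1030, theta)
    print(f"theta = {theta:2d} deg -> {p1:6.0f} nm / {p2:6.0f} nm")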
By bringing these ideas together, the foundations of revolutionary advances, straddling the boundaries of science fiction, can be laid: laser-controlled self-assembly of a plethora of 2D patterns, crystals and quasicrystals alike, eventually assembled layer by layer into the third dimension -- a 3D material synthesiser.
Max ERC Funding
1 999 920 €
Duration
Start date: 2014-06-01, End date: 2019-05-31
Project acronym PRAGMA
Project Pragmatics of Multiwinner Voting: Algorithms and Preference Data Analysis
Researcher (PI) Piotr FALISZEWSKI
Host Institution (HI) AKADEMIA GORNICZO-HUTNICZA IM. STANISLAWA STASZICA W KRAKOWIE
Country Poland
Call Details Consolidator Grant (CoG), PE6, ERC-2020-COG
Summary This proposal is in the area of computational social choice, an area at the intersection of computer science and economics. We study multiwinner elections, with a focus on a pragmatic approach. Our goal is to provide a principled framework for applying multiwinner voting in various settings that may appear in real life (ranging from small-scale elections in various institutions, through participatory budgeting settings, to applications directly within computer science). In particular, we are interested in: (a) designing new, fast algorithms for computing the outcomes of multiwinner voting rules (the results of such rules are often NP-hard to compute), also for the new languages of specifying preferences that are needed in practical settings; (b) obtaining an algorithmic and mathematical understanding of preference data; and (c) providing algorithms for analyzing elections and their results. We are interested both in theoretical studies (designing new algorithms, analyzing the computational complexity of election-related problems, establishing axiomatic features of multiwinner voting rules, etc.) and in experimental evaluations (measuring running times of algorithms, establishing their approximation ratios, evaluating properties of preference data, etc.).
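As a concrete example of the kind of rule involved, the Python sketch below runs the standard greedy (1 - 1/e)-approximation of the Borda-based Chamberlin-Courant rule, whose exact winner determination is NP-hard, on a toy preference profile; this is a well-known textbook algorithm shown for illustration, not an algorithm developed in the project.

def greedy_chamberlin_courant(rankings, k):
    """Greedy (1 - 1/e)-approximation of the Borda-based
    Chamberlin-Courant rule: each voter is represented by their
    favourite committee member, and candidates are added one at a
    time to maximise total Borda satisfaction (the objective is
    submodular, which yields the classical greedy guarantee)."""
    m = len(rankings[0])
    # Borda score voter i assigns to candidate c: m - 1 - position.
    borda = [{c: m - 1 - pos for pos, c in enumerate(r)} for r in rankings]
    committee, best = set(), [0] * len(rankings)
    for _ in range(k):
        gains = {c: sum(max(b[c] - s, 0) for b, s in zip(borda, best))
                 for c in rankings[0] if c not in committee}
        chosen = max(gains, key=gains.get)
        committee.add(chosen)
        best = [max(s, b[chosen]) for s, b in zip(best, borda)]
    return committee

# Toy profile: 4 voters ranking candidates a..d from best to worst.
profile = [["a", "b", "c", "d"],
           ["a", "c", "b", "d"],
           ["d", "c", "b", "a"],
           ["d", "b", "c", "a"]]
print(greedy_chamberlin_courant(profile, 2))  # {'a', 'd'} (order may vary)

Note how the rule's proportional flavour shows up even in this tiny example: the greedy committee pairs the favourite of the first two voters with the favourite of the last two, rather than picking two broadly acceptable but redundant candidates.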
Max ERC Funding
1 386 290 €
Duration
Start date: 2021-06-01, End date: 2026-05-31
Project acronym TUgbOAT
Project Towards Unification of Algorithmic Tools
Researcher (PI) Piotr SANKOWSKI
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Country Poland
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary Over the last 50 years, extensive algorithmic research has given rise to a plethora of fundamental results. These results have equipped us with increasingly better solutions to a number of core problems. However, many of these solutions are incomparable. The main reason is that many cutting-edge algorithmic results are very specialized in their applicability. Often, they are limited to a particular parameter range or require different assumptions.
A natural question arises: is it possible to get a “one to rule them all” algorithm for some core problems, such as matchings and maximum flow? In other words, can we unify our algorithms? That is, can we develop an algorithmic framework that enables us to combine a number of existing, only “conditionally” optimal, algorithms into a single all-around optimal solution? Such results would unify the landscape of algorithmic theory, but would also greatly enhance the impact of these cutting-edge developments on the real world. After all, algorithms and data structures are the basic building blocks of every computer program. However, currently, using cutting-edge algorithms in an optimal way requires extensive expertise and a thorough understanding of both the underlying implementation and the characteristics of the input data.
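For reference, the Python sketch below implements Edmonds-Karp, the classical BFS-based augmenting-path algorithm for maximum flow, one of the core problems named above. It is a textbook baseline, not the unified algorithm the project seeks: faster, more specialized algorithms beat its O(V * E^2) bound under various assumptions, which is exactly the fragmentation described here.

from collections import deque

def edmonds_karp(capacity, source, sink):
    """Maximum flow via Edmonds-Karp: repeatedly find a shortest
    augmenting path with BFS and saturate it; runs in O(V * E^2).
    `capacity` is a dict-of-dicts of edge capacities."""
    # Build a mutable residual network, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck on the path, then push flow along it.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Toy network: the maximum flow from 's' to 't' is 5.
cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2},
       "b": {"t": 3}, "t": {}}
print(edmonds_karp(cap, "s", "t"))  # 5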
Hence, the need for such unified solutions seems to be critical from both a theoretical and a practical perspective. However, obtaining such algorithmic unification poses serious theoretical challenges. We believe that some of the recent advances in algorithms provide us with an opportunity to make serious progress towards meeting these challenges in the context of several fundamental algorithmic problems. This project should be seen as the start of a systematic study of the unification of algorithmic tools, with the aim of removing the need to look “under the hood” while still guaranteeing optimal performance independently of the particular usage case.
Max ERC Funding
1 510 800 €
Duration
Start date: 2018-09-01, End date: 2023-08-31