Project acronym BIOTENSORS
Project Biomedical Data Fusion using Tensor based Blind Source Separation
Researcher (PI) Sabine Jeanne A Van Huffel
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Advanced Grant (AdG), PE6, ERC-2013-ADG
Summary "Summary: the quest for a general functional tensor framework for blind source separation
Our overall objective is the development of a general functional framework for solving tensor-based blind source separation (BSS) problems in biomedical data fusion, using tensor decompositions (TDs) as the basic core. We claim that TDs will allow the extraction of fairly complicated sources of biomedical activity from fairly complicated sets of uni- and multimodal data. The power of the new techniques will be demonstrated for three well-chosen, representative biomedical applications for which extensive expertise and fully validated datasets are available in the PI’s team, namely:
• Metabolite quantification and brain tumour tissue typing using Magnetic Resonance Spectroscopic Imaging,
• Functional monitoring including seizure detection and polysomnography,
• Cognitive brain functioning and seizure zone localization using simultaneous Electroencephalography-functional MR Imaging integration.
Solving these challenging problems requires that algorithmic progress is made in several directions:
• Algorithms need to be based on multilinear extensions of numerical linear algebra.
• New grounds for separation, such as representability in a given function class, need to be explored.
• Prior knowledge needs to be exploited via appropriate health-relevant constraints.
• Biomedical data fusion requires the combination of TDs, coupled via relevant constraints.
• Algorithms for TD updating are important for continuous long-term patient monitoring.
The algorithms will eventually be integrated into an easy-to-use, open-source software platform that is general enough for use in other BSS applications.
Having been involved in biomedical signal processing over a period of 20 years, the PI has a good overview of the field and the opportunities. By working directly at the forefront in close collaboration with the clinical scientists who actually use our software, we can have a huge impact.
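For readers unfamiliar with tensor decompositions, the sketch below fits a rank-R canonical polyadic (CP) decomposition by alternating least squares in plain NumPy. It is a generic illustration only (tensor sizes, rank and iteration count are arbitrary placeholders), not the algorithms to be developed in BIOTENSORS.

```python
import numpy as np

def khatri_rao(X, Y):
    # Column-wise Kronecker product: rows indexed by (i, j) with j varying fastest.
    R = X.shape[1]
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, R)

def cp_als(T, rank, n_iter=100, seed=0):
    """Fit a rank-`rank` CP model T ~ sum_r a_r o b_r o c_r by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem with the other two factors fixed.
        A = np.moveaxis(T, 0, 0).reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(T, 1, 0).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(T, 2, 0).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy example: a noiseless rank-3 tensor is typically recovered to high accuracy.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((10, 3)), rng.standard_normal((12, 3)), rng.standard_normal((8, 3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```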
Max ERC Funding
2 500 000 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym CALCULUS
Project Commonsense and Anticipation enriched Learning of Continuous representations sUpporting Language UnderStanding
Researcher (PI) Marie-Francine MOENS
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Natural language understanding (NLU) by the machine is of great scientific, economic and social value. Humans perform the NLU task in an efficient way by relying on their capability to imagine or anticipate situations. They engage commonsense and world knowledge, often acquired through perceptual experiences, to make explicit what is left implicit in language. Inspired by these characteristics, CALCULUS will design, implement and evaluate innovative paradigms supporting NLU, combining old but powerful ideas for language understanding from the early days of artificial intelligence with new approaches from machine learning. The project focuses on the effective learning of anticipatory, continuous, non-symbolic representations of event frames and narrative structures of events that are trained on language and visual data. The grammatical structure of language is grounded in the geometric structure of visual data while embodying aspects of commonsense and world knowledge. The reusable representations are evaluated in a selection of NLU tasks requiring efficient real-time retrieval of the representations and parsing of the targeted written texts. Finally, we will evaluate the inference potential of the anticipatory representations in situations not seen in the training data and when inferring spatial and temporal information in metric real-world spaces that is not mentioned in the processed language. The machine learning methods focus on latent variable models that rely on Bayesian probabilistic models and neural networks, in settings with limited manually annotated training data. The best models will be integrated in a demonstrator that translates the language of stories to events happening in a 3-D virtual world. The PI has the interdisciplinary expertise in natural language processing, joint processing of language and visual data, information retrieval and machine learning needed for the successful realization of the project.
Max ERC Funding
2 227 500 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym COGNIMUND
Project Cognitive Image Understanding: Image representations and Multimodal learning
Researcher (PI) Tinne Tuytelaars
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary One of the primary and most appealing goals of computer vision is to automatically understand the content of images on a cognitive level. Ultimately we want to have computers interpret images as we humans do, recognizing all the objects, scenes, and people as well as their relations as they appear in natural images or video. With this project, I want to advance the state of the art in this field in two directions, which I believe to be crucial to build the next generation of image understanding tools. First, novel, more robust yet descriptive image representations will be designed that incorporate the intrinsic structure of images. These should already go a long way towards removing irrelevant sources of variability while capturing the essence of the image content. I believe the importance of further research into image representations is currently underestimated within the research community, yet I claim this is a crucial step with many opportunities: good learning cannot easily make up for bad features. Second, weakly supervised methods to learn from multimodal input (especially the combination of images and text) will be investigated, making it possible to leverage the large amount of weak annotations available via the internet. This is essential if we want to scale the methods to a larger number of object categories (several hundreds instead of a few tens). As more data can be used for training, such weakly supervised methods might in the end even come on par with or outperform supervised schemes. Here we will call upon the latest results in semi-supervised learning, data mining, and computational linguistics.
Max ERC Funding
1 538 380 €
Duration
Start date: 2010-02-01, End date: 2015-01-31
Project acronym COLORAMAP
Project Constrained Low-Rank Matrix Approximations: Theoretical and Algorithmic Developments for Practitioners
Researcher (PI) Nicolas Benoit P Gillis
Host Institution (HI) UNIVERSITE DE MONS
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Low-rank matrix approximation (LRA) techniques such as principal component analysis (PCA) are powerful tools for the representation and analysis of high-dimensional data, and are used in a wide variety of areas such as machine learning, signal and image processing, data mining, and optimization. Without any constraints and using the least-squares error, LRA can be solved via the singular value decomposition. However, in practice, this model is often not suitable, mainly because (i) the data might be contaminated with outliers, missing data and non-Gaussian noise, and (ii) the low-rank factors of the decomposition might have to satisfy some specific constraints. Hence, in recent years, many variants of LRA have been introduced, using different constraints on the factors and different objective functions to assess the quality of the approximation; e.g., sparse PCA, PCA with missing data, independent component analysis and nonnegative matrix factorization. Although these new constrained LRA models have become very popular and standard in some fields, there is still a significant gap between theory and practice. In this project, our goal is to reduce this gap by attacking the problem in an integrated way, making connections between LRA variants, and by using four very different but complementary perspectives: (1) computational complexity issues, (2) provably correct algorithms, (3) heuristics for difficult instances, and (4) application-oriented aspects. This unified and multi-disciplinary approach will enable us to understand these problems better, to develop and analyze new and existing algorithms, and then to use them for applications. Our ultimate goal is to provide practitioners with new tools and to allow them to decide which method to use in which situation and to know what to expect from it.
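As a concrete anchor for the unconstrained case mentioned above: the best rank-r approximation in the least-squares sense is given by the truncated singular value decomposition (Eckart-Young theorem). A minimal NumPy sketch, with arbitrary matrix size and rank:

```python
import numpy as np

def truncated_svd(X, r):
    """Best rank-r approximation of X in the Frobenius norm (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))
X5 = truncated_svd(X, r=5)
print("rank:", np.linalg.matrix_rank(X5))
print("relative error:", np.linalg.norm(X - X5) / np.linalg.norm(X))
```

Constrained variants such as nonnegative matrix factorization replace this closed-form step with iterative, generally much harder optimization, which is where the theory-practice gap discussed above arises.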
Max ERC Funding
1 291 750 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym CRASH
Project CRyptographic Algorithms and Secure Hardware
Researcher (PI) François-Xavier Standaert
Host Institution (HI) UNIVERSITE CATHOLIQUE DE LOUVAIN
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary Side-channel attacks are an important threat against cryptographic implementations in which an adversary takes advantage of physical leakages, such as the power consumption of a smart card, in order to recover secret information. By circumventing the models in which standard security proofs are obtained, they can lead to powerful attacks against a large class of devices. As a consequence, formalizing implementation security and efficiently preventing side-channel attacks is one of the most challenging open problems in modern cryptography. Physical attacks imply new optimization criteria, with potential impact on the way we conceive algorithms and the way we design circuits. By putting together mathematical and electrical engineering problems, just as they are raised in reality, the CRASH project is expected to develop concrete foundations for the next generation of cryptographic algorithms and their implementation. For this purpose, three main directions will be considered. First, we will investigate sound evaluation tools for side-channel attacks and validate them on different prototype chips. Second, we will consider the impact of physical attacks on the mathematical aspects of cryptography, both destructively (i.e. by developing new attacks and advanced cryptanalysis tools) and constructively (i.e. by investigating new cipher designs and security proof techniques). Third, we will evaluate the possibility of integrating physical security analysis into the design tools of integrated circuits (e.g. in order to obtain “physical security aware” compilers). Summarizing, this project aims to break the barrier between the abstractions of mathematical cryptography and the concrete peculiarities of physical security in present microelectronic devices. By considering the system and algorithmic issues in a unified way, it is expected to get rid of the incompatibilities between the separate formalisms that are usually considered in order to explain these concurrent realities.
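As background for readers unfamiliar with side-channel attacks, the sketch below runs a toy correlation power analysis on simulated traces: the "power consumption" is modeled as the Hamming weight of a 4-bit S-box output plus Gaussian noise, and the secret nibble is recovered by correlating key hypotheses against the traces. This is a textbook illustration on synthetic data (S-box, noise level and trace count are arbitrary choices), not one of the evaluation tools to be developed in CRASH.

```python
import numpy as np

# A 4-bit S-box (the PRESENT S-box, used here only as an example).
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])

def hamming_weight(x):
    return np.array([bin(int(v)).count("1") for v in np.ravel(x)]).reshape(np.shape(x))

rng = np.random.default_rng(0)
true_key = 0xA
plaintexts = rng.integers(0, 16, size=2000)

# Simulated leakage: Hamming weight of the S-box output, plus measurement noise.
traces = hamming_weight(SBOX[plaintexts ^ true_key]) + rng.normal(0, 1.0, size=plaintexts.shape)

# Correlation power analysis: correlate the predicted leakage for each key guess with the traces.
scores = []
for guess in range(16):
    predicted = hamming_weight(SBOX[plaintexts ^ guess])
    scores.append(abs(np.corrcoef(predicted, traces)[0, 1]))

print("recovered key nibble:", int(np.argmax(scores)), "(true:", true_key, ")")
```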
Max ERC Funding
1 498 874 €
Duration
Start date: 2011-10-01, End date: 2016-09-30
Project acronym CUTACOMBS
Project Cuts and decompositions: algorithms and combinatorial properties
Researcher (PI) Marcin PILIPCZUK
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Call Details Starting Grant (StG), PE6, ERC-2016-STG
Summary In this proposal we plan to extend mathematical foundations of algorithms for various variants of the minimum cut problem within theoretical computer science.
Recent advances in understanding the structure of small cuts and the tractability of cut problems have resulted in a mature algorithmic toolbox for undirected graphs under the paradigm of parameterized complexity. From this position, we now aim at a full understanding of the tractability of cut problems in the more challenging case of directed graphs, and we see opportunities to apply the aforementioned successful structural approach to advance on major open problems in other paradigms in theoretical computer science.
The specific goals of the project are grouped in the following three themes.
Directed graphs. Chart the parameterized complexity of graph separation problems in directed graphs and provide a fixed-parameter tractability toolbox, equally deep as the one in undirected graphs. Provide tractability foundations for routing problems in directed graphs, such as the disjoint paths problem with symmetric demands.
Planar graphs. Resolve main open problems with respect to network design and graph separation problems in planar graphs under the following three paradigms: parameterized complexity, approximation schemes, and cut/flow/distance sparsifiers. Recently discovered connections uncover significant potential in synergy between these three algorithmic approaches.
Tree decompositions. Show improved tractability of graph isomorphism testing in sparse graph classes. Combine the algorithmic toolbox of parameterized complexity with the theory of minimal triangulations to advance our knowledge in structural graph theory, both pure (focused on the Erdos-Hajnal conjecture) and algorithmic (focused on the tractability of Maximum Independent Set and 3-Coloring).
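For readers less familiar with the minimum cut problem that runs through these themes, the following is a small, self-contained sketch (plain Python, no project-specific techniques): an Edmonds-Karp maximum-flow computation whose final residual graph yields a minimum s-t cut, per the max-flow/min-cut theorem. The example graph and capacities are arbitrary.

```python
from collections import deque

def min_st_cut(cap, s, t):
    """Max-flow / min-cut via Edmonds-Karp. `cap` maps u -> {v: capacity}."""
    # Build residual capacities, including zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break
        # Find the bottleneck capacity along the path and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug
    # Vertices reachable from s in the final residual graph form one side of a minimum cut.
    side = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in side:
                side.add(v)
                queue.append(v)
    cut_edges = [(u, v) for u in side for v in cap.get(u, {}) if v not in side]
    return flow, side, cut_edges

capacities = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(min_st_cut(capacities, 's', 't'))
```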
Max ERC Funding
1 228 250 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym DEMIURGE
Project Automatic Design of Robot Swarms
Researcher (PI) Mauro Birattari
Host Institution (HI) UNIVERSITE LIBRE DE BRUXELLES
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The scope of this project is the automatic design of robot swarms. Swarm robotics is an appealing approach to the coordination of large groups of robots. Up to now, robot swarms have been designed via some labor-intensive process.
My goal is to advance the state of the art in swarm robotics by developing the DEMIURGE: an intelligent system that is able to design and realize robot swarms in a totally integrated and automatic way.
The DEMIURGE is a novel concept. Starting from requirements expressed in a specification language that I will define, the DEMIURGE will design all aspects of a robot swarm - hardware and control software.
The DEMIURGE will cast a design problem into an optimization problem and will tackle it in a computation-intensive way. In this project, I will study different control software structures, optimization algorithms, ways to specify requirements, validation protocols, on-line adaptation mechanisms and techniques for re-design at run time.
Max ERC Funding
2 000 000 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym DISPATCH Neuro-Sense
Project Distributed Signal Processing Algorithms for Chronic Neuro-Sensor Networks
Researcher (PI) Alexander BERTRAND
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary The possibility to chronically monitor the brain 24/7 in daily-life activities would revolutionize human-machine interactions and health care, e.g., in the context of neuroprostheses, neurological disorders, and brain-computer interfaces (BCI). Such chronic systems must satisfy challenging energy and miniaturization constraints, leading to modular designs in which multiple networked miniature neuro-sensor modules form a ‘neuro-sensor network’ (NSN).
However, current multi-channel neural signal processing (NSP) algorithms were designed for traditional neuro-sensor arrays with central access to all channels. These algorithms are not suited for NSNs, as they require unrealistic bandwidth budgets to centralize the data, yet a joint neural data analysis across NSN modules is crucial.
The central idea of this project is to remove this algorithm bottleneck by designing novel scalable, distributed NSP algorithms to let the modules of an NSN jointly process the recorded neural data through in-network data fusion and with a minimal exchange of data.
To guarantee impact, we mainly focus on establishing a new non-invasive NSN concept based on electroencephalography (EEG). By combining multiple ‘smart’ mini-EEG modules into an ‘EEG sensor network’ (EEG-Net), we compensate for the lack of spatial information captured by current stand-alone mini-EEG devices, without compromising on ‘wearability’. Equipping such EEG-Nets with distributed NSP algorithms will make it possible to process high-density EEG data at viable energy levels, which is a game changer towards high-performance chronic EEG for, e.g., epilepsy monitoring, neuroprostheses, and BCI.
We will validate these claims in an EEG-Net prototype in the above 3 use cases, benefiting from ongoing collaborations with the KUL university hospital. In addition, to demonstrate the general applicability of our novel NSP algorithms, we will validate them in other emerging NSN types as well, such as modular or untethered neural probes.
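As a generic illustration of in-network fusion with only local data exchange (not the NSP algorithms to be developed in DISPATCH Neuro-Sense), the sketch below runs distributed average consensus with Metropolis weights on a toy four-module network; the topology and measurement values are arbitrary.

```python
import numpy as np

# Hypothetical setup: each sensor module holds a local measurement and only exchanges
# values with its neighbours in the network graph (no central collection of raw data).
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a chain of 4 modules
values = np.array([1.0, 4.0, 2.0, 9.0])               # local measurements

# Distributed average consensus: repeated local mixing converges to the global mean.
x = values.copy()
for _ in range(200):
    x_new = x.copy()
    for i, nbrs in neighbours.items():
        for j in nbrs:
            # Metropolis-style weights keep the iteration average-preserving.
            w = 1.0 / (1 + max(len(neighbours[i]), len(neighbours[j])))
            x_new[i] += w * (x[j] - x[i])
    x = x_new

print("local estimates:", np.round(x, 4), "  true mean:", values.mean())
```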
Max ERC Funding
1 489 656 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym DPMP
Project Dependable Performance on Many-Thread Processors
Researcher (PI) Lieven Eeckhout
Host Institution (HI) UNIVERSITEIT GENT
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary Contemporary microprocessors seek at improving performance through thread-level parallelism by co-executing multiple threads on a single microprocessor chip. Projections suggest that future processors will feature multiple tens to hundreds of threads, hence called many-thread processors. Many-thread processors, however, lead to non-dependable performance: co-executing threads affect each other s performance in unpredictable ways because of resource sharing across threads. Failure to deliver dependable performance leads to missed deadlines, priority inversion, unbalanced parallel execution, etc., which will severely impact the usage model and the performance growth path for many important future and emerging application domains (e.g., media, medical, datacenter).
DPMP envisions that performance introspection, using a cycle accounting architecture that tracks per-thread performance, will be the breakthrough to delivering dependable performance in future many-thread processors. To this end, DPMP will develop a hardware cycle accounting architecture that estimates single-thread progress during many-thread execution. The ability to track per-thread progress enables system software to deliver dependable performance by assigning hardware resources to threads depending on their relative progress. Through this cooperative hardware-software approach, this project addresses a fundamental problem in multi-threaded and multi/many-core processing.
Max ERC Funding
1 389 000 €
Duration
Start date: 2010-10-01, End date: 2016-09-30
Project acronym E-SWARM
Project Engineering Swarm Intelligence Systems
Researcher (PI) Marco Dorigo
Host Institution (HI) UNIVERSITE LIBRE DE BRUXELLES
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary Swarm intelligence is the discipline that deals with natural and artificial systems composed of many individuals that coordinate using decentralized control and self-organization. In this project, we focus on the design and implementation of artificial swarm intelligence systems for the solution of complex problems. Our current understanding of how to use swarms of artificial agents largely relies on rules of thumb and intuition based on the experience of individual researchers. This is not sufficient for us to design swarm intelligence systems at the level of complexity required by many real-world applications, or to accurately predict the behavior of the systems we design. The goal of E-SWARM is to develop a rigorous engineering methodology for the design and implementation of artificial swarm intelligence systems. We believe that in the future, swarm intelligence will be an important tool for researchers and engineers interested in solving certain classes of complex problems. To build the foundations of this discipline and to develop an appropriate methodology, we will proceed in parallel both at an abstract level and by tackling a number of challenging problems in selected research domains. The research domains we have chosen are optimization, robotics, networks, and data mining.
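For orientation, the sketch below shows one classic swarm-intelligence algorithm for the optimization domain, particle swarm optimization, applied to a toy objective. It is a generic textbook example (objective, parameter values and swarm size are arbitrary), not one of the systems to be engineered in E-SWARM.

```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: particles move under inertia plus
    attraction towards their own best and the swarm's best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                              # inertia and attraction weights
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy objective: the sphere function, minimized at the origin.
best_x, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=5)
print(best_val)
```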
Max ERC Funding
2 016 000 €
Duration
Start date: 2010-06-01, End date: 2015-05-31
Project acronym FOREFRONT
Project Frontiers of Extended Formulations
Researcher (PI) Samuel Fiorini
Host Institution (HI) UNIVERSITE LIBRE DE BRUXELLES
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary "Linear programming has proved to be an invaluable tool both in theory and practice. Semidefinite programming surpasses linear programming in terms of expressivity while remaining tractable. This project proposal investigates the modeling power of linear and semidefinite programming, in the context of combinatorial optimization. Within the emerging framework of extended formulations (EFs), I seek a decisive answer to the following question: Which problems can be modeled by a linear or semidefinite program, when the number of constraints and variables are limited? EFs are based on the idea that one should choose the ""right"" variables to model a problem. By extending the set of variables of a problem by a few carefully chosen variables, the number of constraints can in some cases dramatically decrease, making the problem easier to solve. Despite previous high-quality research, the theory of EFs is still on square one. This project proposal aims at (i) transforming our current zero-dimensional state of knowledge to a truly three-dimensional state of knowledge by pushing the boundaries of EFs in three directions (models, types and problems); (ii) using EFs as a lens on complexity by proving strong consequences of important conjectures such as P != NP, and leveraging strong connections to geometry to make progress on the log-rank conjecture. The proposed methodology is: (i) experiment-aided; (ii) interdisciplinary; (iii) constructive."
Max ERC Funding
1 455 479 €
Duration
Start date: 2014-09-01, End date: 2019-08-31
Project acronym FORSIED
Project Formalizing Subjective Interestingness in Exploratory Data Mining
Researcher (PI) Tijl De Bie
Host Institution (HI) UNIVERSITEIT GENT
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary "The rate at which research labs, enterprises and governments accumulate data is high and fast increasing. Often, these data are collected for no specific purpose, or they turn out to be useful for unanticipated purposes: Companies constantly look for new ways to monetize their customer databases; Governments mine various databases to detect tax fraud; Security agencies mine and cross-associate numerous heterogeneous information streams from publicly accessible and classified databases to understand and detect security threats. The objective in such Exploratory Data Mining (EDM) tasks is typically ill-defined, i.e. it is unclear how to formalize how interesting a pattern extracted from the data is. As a result, EDM is often a slow process of trial and error.
During this fellowship we aim to develop the mathematical principles of what makes a pattern interesting in a very subjective sense. Crucial in this endeavour will be research into automatic mechanisms to model and duly consider the prior beliefs and expectations of the user for whom the EDM patterns are intended, thus relieving users of the complex task of formalizing what makes a pattern interesting to them.
This project will represent a radical change in how EDM research is done. Currently, researchers typically imagine a specific purpose for the patterns, try to formalize interestingness of such patterns given that purpose, and design an algorithm to mine them. However, given the variety of users, this strategy has led to a multitude of algorithms. As a result, users need to be data mining experts to understand which algorithm applies to their situation. To resolve this, we will develop a theoretically solid framework for the design of EDM systems that model the user's beliefs and expectations as much as the data itself, so as to maximize the amount of useful information transmitted to the user. This will ultimately bring the power of EDM within reach of the non-expert.
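As a toy illustration of measuring interestingness against a user's expectations (a generic lift-style score under an independence background model, computed on synthetic data; it is not the formalism to be developed in FORSIED): a pattern whose observed support far exceeds what the background model predicts is, in this simple sense, surprising.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical binary data: rows are transactions, columns are items.
data = (rng.random((1000, 4)) < np.array([0.6, 0.5, 0.3, 0.1])).astype(int)
data[:, 3] = data[:, 0] & data[:, 1]          # inject a dependency the independence model ignores

def log_lift(data, itemset):
    """Log2 ratio of the itemset's observed support to the support expected under a
    background model of independent items with empirical marginals."""
    observed = np.mean(np.all(data[:, list(itemset)] == 1, axis=1))
    expected = float(np.prod(data[:, list(itemset)].mean(axis=0)))
    return np.log2(max(observed, 1e-12) / max(expected, 1e-12))

# The injected dependency (items 0 and 3) scores clearly higher than the independent pairs.
for itemset in [(0, 1), (0, 3), (2, 3)]:
    print(itemset, round(log_lift(data, itemset), 3))
```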
Max ERC Funding
1 549 315 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym IMPaCT
Project Implementing Multi-Party Computation Technology
Researcher (PI) Nigel Paul Smart
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary The goal of IMPaCT is to turn Multi-Party Computation (MPC) from a stage in which we are beginning to obtain practical feasibility results, to a stage in which we have fully practical systems. It has long been acknowledged that MPC has the potential to provide a transformative change in the way security solutions are enabled. As it presently stands, this is only possible in limited applications; deployments in restricted scenarios are beginning to emerge. However, in turning MPC into a fully practical technology a number of key scientific challenges need to be solved, many of which have not yet even been considered in the theoretical literature. The IMPaCT project aims to address this scientific gap, bridge it, and so provide the tools for a future road-map in which MPC can be deployed as a widespread tool, as ubiquitous as encryption and digital signatures are today.
Our scientific approach will be to investigate new MPC protocols and techniques which take into account practical constraints and issues which would arise in future application scenarios. Our work, despite being scientifically rigorous and driven by deep theoretical insight, will be grounded in practical considerations. All systems and protocols proposed will be prototyped so as to ensure that practical real-world issues are taken into account. In addition we will use our extensive industrial linkages to ensure a two-way dialogue between potential users and the developers of MPC technology, thus helping to embed future impact of the work in IMPaCT.
Our work plan is focused on key scientific challenges which we have identified on the road to fully practical MPC applications. These include the design of methodologies to cope with the asynchronicity of networks, how to realistically measure and model MPC protocol performance, how to utilize low-round-complexity protocols in practice, how to deal with problems with large input sizes (e.g. streaming data), and many more.
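For orientation, the snippet below illustrates the most basic MPC building block (not any of the protocols to be developed in IMPaCT): additive secret sharing over a prime field, where three parties each hold a random-looking share of every private input, no single share reveals anything, and the sum of the inputs can be reconstructed from the sums of the shares. The modulus and inputs are arbitrary.

```python
import secrets

P = 2**61 - 1  # a prime modulus; all arithmetic is done in the field Z_P

def share(secret, n_parties=3):
    """Split `secret` into n additive shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each party secret-shares its private input with the others.
inputs = [42, 1000, 7]
all_shares = [share(x) for x in inputs]

# Party i locally adds the i-th share of every input -- no private value is revealed.
local_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]

# Reconstructing the shared result yields the sum of the private inputs.
print(reconstruct(local_sums), "==", sum(inputs) % P)
```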
Max ERC Funding
2 499 938 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym INVEST
Project inVEST: Foundations for a Shift from Verification to Synthesis
Researcher (PI) Jean-Francois Raskin
Host Institution (HI) UNIVERSITE LIBRE DE BRUXELLES
Call Details Starting Grant (StG), PE6, ERC-2011-StG_20101014
Summary Reactive systems are computer systems that maintain a continuous interaction with the environment in which they execute. Examples of reactive systems are controllers embedded in cars or planes, system-level software, device drivers, communication protocols, etc. On the one hand, those systems are notoriously difficult to develop correctly (because of characteristics like concurrency, real-time constraints, parallelism, etc.). On the other hand, their correctness is often critical, as they are used in contexts where safety is an issue, or for economic reasons related to mass production.
To ensure reliability of reactive systems, advanced verification techniques have been developed. One particularly successful approach is model-checking. Nevertheless, model-checking is used to find bugs in designs but it does not support the design itself.
In this project, we want to develop new algorithms and tools to support the automatic synthesis of modern reactive systems (instead of their verification a posteriori). We aim at a shift from verification to synthesis. To allow this shift, we need new foundations: we propose to generalize transition systems and automata – models of computation in the classical approach to verification – by the more flexible, and mathematically deeper, game-theoretic framework. Our work will be of fundamental nature but will also aim at the development of algorithms and tools. Those new foundations will allow for the development of a new generation of computer-aided design tools that will support the automatic synthesis of modern reactive systems and ensure correctness by construction.
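As a small, self-contained illustration of the game-theoretic flavour referred to above (a standard textbook construction, not the project's contribution): in a two-player game played on a graph, the set of vertices from which the "system" player can force a visit to a target set, the attractor, can be computed by a simple backward fixpoint, and it directly induces a winning strategy. The vertex names, edges and ownership below are arbitrary.

```python
def attractor(vertices, edges, owner, target):
    """Compute the set of vertices from which player 0 can force reaching `target`.
    `owner[v]` is 0 or 1; player 0 picks the successor at her vertices, player 1 at his."""
    succ = {v: [w for (u, w) in edges if u == v] for v in vertices}
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            if owner[v] == 0 and any(w in attr for w in succ[v]):
                attr.add(v); changed = True          # player 0 has some edge into the attractor
            elif owner[v] == 1 and succ[v] and all(w in attr for w in succ[v]):
                attr.add(v); changed = True          # player 1 cannot avoid the attractor
    return attr

vertices = ['a', 'b', 'c', 'd', 'goal']
edges = [('a', 'b'), ('a', 'c'), ('b', 'goal'), ('c', 'd'), ('d', 'c'), ('goal', 'goal')]
owner = {'a': 0, 'b': 1, 'c': 1, 'd': 0, 'goal': 0}
print(attractor(vertices, edges, owner, {'goal'}))   # expected: {'goal', 'b', 'a'}
```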
Max ERC Funding
1 415 255 €
Duration
Start date: 2012-01-01, End date: 2017-09-30
Project acronym LIPA
Project A unified theory of finite-state recognisability
Researcher (PI) Mikolaj Konstanty Bojanczyk
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Finite-state devices like finite automata and monoids on finite words, or extensions to trees and infinite objects, are fundamental tools of logic in computer science. There are tens of models in the literature, ranging from finite automata on finite words to weighted automata on infinite trees. Many existing finite-state models share important similarities, like existence of canonical (minimal) devices, or decidability of emptiness, or a logic-automata connection. The first and primary goal of this project is to systematically investigate these similarities, and create a unified theory of finite-state devices, which:
1. covers the whole spectrum of existing finite-state devices, including settings with diverse inputs (e.g. words and trees, or infinite inputs, or infinite alphabets) and diverse outputs (e.g. Boolean like in the classical automata, or numbers like in weighted automata); and
2. sheds light on the correct notion of finite-state device in settings where there is no universally accepted choice or where finite-state devices have not been considered at all.
The theory of finite-state devices is one of those fields of theory where even the more advanced results have natural potential for applications. It is surprising and sad how little of this potential is normally realised, with most existing software using only the most rudimentary theoretical techniques. The second goal of the project is to create two tools which use more advanced aspects of the theory of automata to solve simple problems of wide applicability (i.e. at least tens of thousands of users):
1. a system that automatically grades exercises in automata, which goes beyond simple testing, and forces the students to write proofs
2. a system that uses learning to synthesise text transformations (such as search-and-replace, but also more powerful ones) by using examples
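For readers outside the area, the smallest example of the kind of finite-state device discussed above is a deterministic finite automaton. The sketch below (illustrative only; the language and state names are arbitrary) recognizes words over {a, b} with an even number of a's, using two states.

```python
def make_dfa(states, alphabet, delta, start, accepting):
    """Return a recognizer function for the language of the given DFA."""
    def accepts(word):
        q = start
        for letter in word:
            assert letter in alphabet, f"letter {letter!r} not in alphabet"
            q = delta[(q, letter)]
        return q in accepting
    return accepts

# DFA over {a, b} accepting words with an even number of a's.
even_as = make_dfa(
    states={'even', 'odd'},
    alphabet={'a', 'b'},
    delta={('even', 'a'): 'odd', ('even', 'b'): 'even',
           ('odd', 'a'): 'even', ('odd', 'b'): 'odd'},
    start='even',
    accepting={'even'},
)

for w in ["", "ab", "aab", "ababa"]:
    print(repr(w), even_as(w))
```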
Max ERC Funding
1 768 125 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym Load Slice Core
Project Load Slice Core: A Power and Cost-Efficient Microarchitecture for the Future
Researcher (PI) Lieven Eeckhout
Host Institution (HI) UNIVERSITEIT GENT
Call Details Advanced Grant (AdG), PE6, ERC-2016-ADG
Summary The ideal processor building block is a power and cost-efficient core that can maximize the extraction of memory hierarchy parallelism, a combination that neither traditional in-order nor out-of-order cores provide. We propose the Load Slice Core microarchitecture, a restricted out-of-order engine aimed squarely at extracting memory hierarchy parallelism, which, according to preliminary results, delivers a nearly 8 times higher performance per Watt per euro compared to an out-of-order core.
The overarching objective of this project is to fully determine the potential of the Load Slice Core as a key building block for a novel multi-core processor architecture needed in light of both current and future challenges in software and hardware, including variable thread-level parallelism, managed language workloads, the importance of sequential performance, and the quest for significantly improved power and cost efficiency.
We anticipate significant improvement in multi-core performance within the available power budget and cost by combining chip-level dynamism to cope with variable thread-level parallelism along with the inherent power- and cost-efficient Load Slice Core design. If we are able to demonstrate the true value and potential of the Load Slice Core to address future hardware and software challenges, this project will have a long-lasting impact on the microprocessor industry moving forward.
Max ERC Funding
2 499 500 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym LOPRE
Project Lossy Preprocessing
Researcher (PI) Saket SAURABH
Host Institution (HI) UNIVERSITETET I BERGEN
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary A critical component of computational processing of data sets is the 'preprocessing' or 'compression' step, which is the computation of a succinct, sufficiently accurate representation of the given data. Preprocessing is ubiquitous, and a rigorous mathematical understanding of preprocessing algorithms is crucial in order to reason about and understand the limits of preprocessing.
Unfortunately, there is no mathematical framework to analyze and objectively compare two preprocessing routines while simultaneously taking into account all three dimensions:
- the efficiency of computing the succinct representation,
- the space required to store this representation, and
- the accuracy with which the original data is captured in the succinct representation.
"The overarching goal of this proposal is the development of a mathematical framework for the rigorous analysis of preprocessing algorithms."
We will achieve this goal by designing new algorithmic techniques for preprocessing, by developing a framework of analysis to make qualitative comparisons between various preprocessing routines based on the criteria above, and by developing the lower bound tools required to understand the limitations of preprocessing for concrete problems.
This project will lift our understanding of algorithmic preprocessing to new heights and lead to a groundbreaking shift in the set of basic research questions attached to the study of preprocessing for specific problems. It will significantly advance the analysis of preprocessing and yield substantial technology transfer between adjacent subfields of computer science such as dynamic algorithms, streaming algorithms, property testing and graph theory.
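As a concrete, generic example of what a preprocessing routine looks like along these three dimensions (this is the classic high-degree rule for Vertex Cover, a standard textbook reduction rather than a contribution of this proposal; the function and variable names are chosen for the sketch), the rule below runs in polynomial time, shrinks the stored instance, and preserves the answer exactly; lossy preprocessing would additionally permit a controlled loss of accuracy in exchange for further compression.
```python
def high_degree_rule(adj, k):
    """Classic preprocessing rule for Vertex Cover parameterised by k:
    a vertex of degree > k must belong to every vertex cover of size <= k,
    so it can be moved into the partial solution and deleted from the graph.
    `adj` maps each vertex to the set of its neighbours; the rule is applied
    until it no longer fires, yielding a smaller, equivalent instance."""
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if k >= 0 and len(adj[v]) > k:
                forced.add(v)
                for u in adj[v]:          # remove v from its neighbours' lists
                    adj[u].discard(v)
                del adj[v]                # remove v from the graph
                k -= 1
                changed = True
    return adj, forced, k

# A star with centre 'c' and five leaves: for k = 2 the centre is forced
# into the cover, after which only isolated vertices remain.
g = {'c': {'l1', 'l2', 'l3', 'l4', 'l5'},
     'l1': {'c'}, 'l2': {'c'}, 'l3': {'c'}, 'l4': {'c'}, 'l5': {'c'}}
print(high_degree_rule(g, 2))
```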
Max ERC Funding
2 000 000 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym MIGRANT
Project Mining Graphs and Networks: a Theory-based approach
Researcher (PI) Jan Ramon
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary In this project we aim at formulating and enhancing theoretical foundations for the emerging field of graph mining. Graph mining is the field concerned with extracting interesting patterns and knowledge from graph- or network-structured data, such as can be found in chemistry, bioinformatics, the World Wide Web, social networks, etc. Recent work has shown that many standard data mining techniques can be extended to structured data and can yield interesting results, but also that, when applied to complex real-world data, these standard techniques often become computationally intractable. In this project we aim at providing a better understanding of the complexity of the tasks considered in the field of graph mining, and at proposing techniques to better exploit the properties of the data. To this aim, we will bring together insights from the fields of data mining, graph theory, learning theory and different application fields, and add our own original contributions. Key features of the methodology include the ground-breaking integration of insights from graph theory into data mining and learning approaches, the development of efficient prototype algorithms, and interdisciplinary collaboration with application domain experts to validate the practical value of the work. The potential impact of this project is significant, as it will be the first systematic study of the theory of graph mining, it will provide foundations on which later research can build, and it will have applications in the many domains with complex data.
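For illustration only (the proposal does not commit to particular patterns or algorithms; the graph and code below are made-up examples), one of the simplest graph mining primitives is counting occurrences of a fixed pattern such as a triangle; already for slightly larger patterns the embedded subgraph-isomorphism test makes naive enumeration intractable, which is the kind of complexity barrier the project sets out to understand.
```python
from itertools import combinations

def count_triangles(adj):
    """Count triangles (3-cliques) in an undirected graph given as an
    adjacency dict.  Each triangle is counted once, at its smallest vertex,
    by testing pairs of that vertex's larger neighbours for adjacency."""
    count = 0
    for v in adj:
        larger = sorted(u for u in adj[v] if u > v)
        for a, b in combinations(larger, 2):
            if b in adj[a]:
                count += 1
    return count

# A 4-cycle 1-2-3-4 with the chord 1-3 contains exactly two triangles.
g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
print(count_triangles(g))   # -> 2
```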
Max ERC Funding
1 716 066 €
Duration
Start date: 2009-12-01, End date: 2015-05-31
Project acronym NANOFIB
Project Nano fibrous materials - structure, design and application
Researcher (PI) Christian Clasen
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE6, ERC-2007-StG
Summary The performance and physical attributes of a material or product can be tailored to so far unmatched material strengths and properties by creating new nano-fibrous structures from polymers by electrospinning. The electrospinning process uses an electric field to produce charged jets of polymer solutions or melts. Bending instabilities of the jet, caused by the surface charge, lead to extremely high local extension rates of the jet and produce fibres with diameters of the order of a few nanometers that consist of highly aligned polymer strands. However, the biggest unsolved problem of the electrospinning process is the sensitive equilibrium between the surface tension, viscosity, elasticity and conductivity of the polymer solutions. These are controlled by molecular parameters such as the molar mass, chemical microstructure, conformation in solution or supramolecular structures formed via intermolecular interactions. The optimal combination of these parameters is, as yet, unknown. Within this project, a novel and unique technical platform will be developed and installed that is capable of imaging and analysing high-speed free-surface flows in miniaturised dimensions. This platform will then be used to analyse electrospinning process parameters and to connect them to the material properties and the molecular structure of the polymer solution. Only such a fundamental understanding of the relation of these properties to the flow and mass transfer phenomena on the micro-time and micro-dimensional scale will allow us to design, in the second part of this project, the required structural and material properties of nano-scale fibres for:
- novel fibre/matrix composites for the creation of ultra-high-strength hydrogel membranes;
- short fibre morphologies created by a novel controlled disruptive spinning process at the boundaries of the parameter space;
- tailoring of fibre properties from renewable resources by modification of the chemical side-chain structure of polysaccharides.
Max ERC Funding
1 228 736 €
Duration
Start date: 2008-07-01, End date: 2013-06-30
Project acronym PAAL
Project Practical Approximation Algorithms
Researcher (PI) Piotr Sankowski
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Call Details Starting Grant (StG), PE6, ERC-2010-StG_20091028
Summary The goal of this proposal is the development and study of practical approximation algorithms. We will base our study on theoretical models that can describe the requirements that make algorithms practically efficient, and we plan to develop an efficient and useful programming library of approximation algorithms.
Our research on approximation algorithms will concentrate on two main topics:
- multi-problem optimization, where the solution has to be composed from different problems that need to interact,
- the interplay between the regular and random structure of networks that could allow the construction of good approximation algorithms.
These concepts try to capture the notion of effective algorithms. It has to be underlined that they have not been studied before.
The practical importance of these problems will be verified by the accompanying work on generic programming concepts for approximation algorithms. These concepts will form the basis of a universal library that will include Web algorithms and algorithms for physical applications.
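To indicate the kind of routine such a library could contain (the algorithm shown is the standard maximal-matching 2-approximation for minimum vertex cover, a textbook result; the interface is an assumption of this sketch, not the library's actual API), here is an example that is both provably approximate and simple enough to be practically efficient.
```python
def vertex_cover_2approx(edges):
    """Maximal-matching 2-approximation for minimum vertex cover:
    repeatedly pick an edge with both endpoints uncovered and add both
    endpoints to the cover.  The picked edges form a matching, and every
    cover must contain one endpoint of each matched edge, so the returned
    cover is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path a-b-c-d: the optimum cover {b, c} has size 2; the greedy cover
# returned here has size 4, within the guaranteed factor of 2.
print(vertex_cover_2approx([("a", "b"), ("b", "c"), ("c", "d")]))
```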
Max ERC Funding
1 000 000 €
Duration
Start date: 2010-11-01, End date: 2015-10-31