Project acronym ACO
Project The Proceedings of the Ecumenical Councils from Oral Utterance to Manuscript Edition as Evidence for Late Antique Persuasion and Self-Representation Techniques
Researcher (PI) Peter Alfred Riedlberger
Host Institution (HI) OTTO-FRIEDRICH-UNIVERSITAET BAMBERG
Call Details Starting Grant (StG), SH5, ERC-2015-STG
Summary The Acts of the Ecumenical Councils of Late Antiquity include (purportedly) verbatim minutes of the proceedings, a formal framework and copies of relevant documents which were either (allegedly) read out during the proceedings or later attached to the Acts proper. Despite this unusual wealth of documentary evidence, the daunting nature of the Acts, which demand multidisciplinary competency, their complex structure, with a matryoshka-like nesting of proceedings from different dates, and the stereotype that their contents bear only on Christological niceties have deterred generations of historians from studying them. Only in recent years have their fortunes begun to improve, but this recent research has not always been based on sound principles: the recorded proceedings of the sessions are still often accepted as verbatim minutes. Yet even a superficial reading quickly reveals widespread editorial interference. We must accept that in many cases the Acts will teach us less about the actual debates than about the editors who shaped their presentation. This does not diminish the Acts’ value as evidence: on the contrary, they are first-rate material for the rhetoric of persuasion and self-representation. It is possible, in fact, to take the investigation to a deeper level and examine how the oral proceedings were put into writing: several passages in the Acts comment upon the process of note-taking and the work of the shorthand writers. Thus, the main objective of the proposed research project is to trace the destinies of the Acts’ texts, from oral utterance to the manuscript texts we have today. This will include the fullest study of ancient transcription techniques to date; a structural analysis of the Acts’ texts with the aim of highlighting edited passages; and a careful comparison of the various editions of the Acts, which survive in Greek, Latin, Syriac and Coptic, in order to detect traces of editorial interference.
Max ERC Funding
1 497 250 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym ARCA
Project Analysis and Representation of Complex Activities in Videos
Researcher (PI) Juergen Gall
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of the project is to automatically analyse human activities observed in videos. Any solution to this problem will allow the development of novel applications. It could be used to create short videos that summarize daily activities to support patients suffering from Alzheimer's disease. It could also be used for education, e.g., by providing a video analysis that shows a hospital trainee whether the tasks have been executed correctly.
The analysis of complex activities in videos, however, is very challenging since activities vary in duration from minutes to hours, involve interactions with several objects that change their appearance and shape (e.g., food during cooking), and are composed of many sub-activities, which can happen at the same time or in various orders.
While the majority of recent work in action recognition focuses on developing better feature encoding techniques for classifying sub-activities in short video clips of a few seconds, this project moves forward and aims to develop a higher-level representation of complex activities that overcomes the limitations of current approaches. This includes handling large temporal variations and the ability to recognize and locate complex activities in videos. To this end, we aim to develop a unified model that provides detailed information about the activities and sub-activities in terms of time and spatial location, as well as the involved pose motion, objects and their transformations.
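To make the shape of such a unified output representation concrete, here is a minimal Python sketch with hypothetical names (Segment, Activity); it is illustrative only, not the project's actual model: a complex activity decomposes into sub-activities, each localized in time and space and linked to the objects involved.

```python
# Illustrative sketch only: hypothetical container types for a hierarchical
# activity representation with temporal extent, spatial location and objects.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Segment:
    label: str                       # sub-activity, e.g. "crack egg"
    t_start: float                   # seconds from the start of the video
    t_end: float
    bbox: Tuple[int, int, int, int]  # spatial location as (x, y, width, height)
    objects: List[str] = field(default_factory=list)  # involved objects

@dataclass
class Activity:
    label: str                       # complex activity, e.g. "prepare omelette"
    sub_activities: List[Segment] = field(default_factory=list)

    def duration(self) -> float:
        """Total temporal extent spanned by the sub-activities."""
        if not self.sub_activities:
            return 0.0
        return (max(s.t_end for s in self.sub_activities)
                - min(s.t_start for s in self.sub_activities))
```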
Another aspect of the project is to learn a representation from videos that is not tied to a specific source of videos or limited to a specific application. Instead, we aim to learn a representation that is invariant to a change of perspective, e.g., from a third-person to an egocentric perspective, and that can be applied to various modalities, such as video or depth data, without the need to collect massive training data for all modalities. In other words, we aim to learn the essence of activities.
Max ERC Funding
1 499 875 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym ASYFAIR
Project Fair and Consistent Border Controls? A Critical, Multi-methodological and Interdisciplinary Study of Asylum Adjudication in Europe
Researcher (PI) Nicholas Mark Gill
Host Institution (HI) THE UNIVERSITY OF EXETER
Call Details Starting Grant (StG), SH3, ERC-2015-STG
Summary ‘Consistency’ is regularly cited as a desirable attribute of border control, but it has received little critical social-scientific attention. This interdisciplinary project, at the interface between critical human geography, border studies and law, will scrutinise the consistency of European asylum adjudication in order to develop a richer theoretical understanding of this lynchpin concept. It will move beyond the administrative legal concepts of substantive and procedural consistency by advancing a three-fold conceptualisation of consistency – as everyday practice, discursive deployment of facts and disciplinary technique. In order to generate productive intellectual tension, it will also employ an explicitly antagonistic conceptualisation of the relationship between geography and law that views law as seeking to constrain and systematise lived space. The project will employ an innovative combination of methodologies, including quantitative analysis, multi-sited legal ethnography, discourse analysis and interviews, that will produce unique and rich data sets, and the findings are likely to be of interest both to academic communities, such as geographers and legal and border scholars, and to policy makers and activists working in border control settings. In 2013 the Common European Asylum System (CEAS) was launched to standardise the procedures of asylum determination. But as yet no sustained multi-methodological assessment of the claims of consistency inherent to the CEAS has been carried out. This project offers not only the opportunity to assess progress towards harmonisation of asylum determination processes in Europe, but will also provide a new conceptual framework with which to approach the dilemmas and risks of inconsistency in an area of law fraught with political controversy and uncertainty around the world. Most fundamentally, the project promises to debunk the myths surrounding the possibility of fair and consistent border controls in Europe and elsewhere.
Max ERC Funding
1 252 067 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym BIGCODE
Project Learning from Big Code: Probabilistic Models, Analysis and Synthesis
Researcher (PI) Martin Vechev
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of this proposal is to fundamentally change the way we build and reason about software. We aim to develop new kinds of statistical programming systems that provide probabilistically likely solutions to tasks that are difficult or impossible to solve with traditional approaches.
These statistical programming systems will be based on probabilistic models of massive codebases (also known as "Big Code") built via a combination of advanced programming language techniques and powerful machine learning and natural language processing techniques. To solve a particular challenge, a statistical programming system will query a probabilistic model, compute the most likely predictions, and present those to the developer.
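As a toy stand-in for the query-predict loop just described (illustrative only, not the proposal's actual system), the following Python sketch trains a bigram model over code tokens and returns the most likely completions:

```python
# A toy stand-in for the query-predict loop described above (illustrative only):
# a bigram model over code tokens that returns the most likely completions.
from collections import Counter, defaultdict

class ToyBigramCodeModel:
    def __init__(self):
        self.counts = defaultdict(Counter)  # previous token -> counts of next tokens

    def train(self, token_streams):
        for tokens in token_streams:
            for prev, nxt in zip(tokens, tokens[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, prev_token, k=3):
        # Query the model: return the k most likely next tokens with probabilities.
        nexts = self.counts[prev_token]
        total = sum(nexts.values())
        return [(tok, c / total) for tok, c in nexts.most_common(k)]

# Hypothetical usage: train on a (tiny) corpus of tokenized programs, then query.
corpus = [["for", "i", "in", "range", "(", "n", ")", ":"],
          ["for", "x", "in", "items", ":"]]
model = ToyBigramCodeModel()
model.train(corpus)
print(model.predict("in"))  # [('range', 0.5), ('items', 0.5)]
```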
Based on probabilistic models of "Big Code", we propose to investigate new statistical techniques in the context of three fundamental research directions: i) statistical program synthesis, where we develop techniques that automatically synthesize and predict new programs; ii) statistical prediction of program properties, where we develop new techniques that can predict important facts (e.g., types) about programs; and iii) statistical translation of programs, where we investigate new techniques for statistical translation of programs (e.g., from one programming language to another, or to a natural language).
We believe the research direction outlined in this interdisciplinary proposal opens a new and exciting area of computer science. This area will combine sophisticated statistical learning and advanced programming language techniques for building the next-generation statistical programming systems.
We expect the results of this proposal to have an immediate impact upon millions of developers worldwide, triggering a paradigm shift in the way tomorrow's software is built, as well as a long-lasting impact on scientific fields such as machine learning, natural language processing, programming languages and software engineering.
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym BroadSem
Project Induction of Broad-Coverage Semantic Parsers
Researcher (PI) Ivan Titov
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary In the last one or two decades, language technology has achieved a number of important successes, for example, producing functional machine translation systems and beating humans in quiz games. The key bottleneck that prevents further progress in these and many other natural language processing (NLP) applications (e.g., text summarization, information retrieval, opinion mining, dialog and tutoring systems) is the lack of accurate methods for producing meaning representations of texts. Accurately predicting such meaning representations in an open domain with an automatic parser is a challenging and unsolved problem, primarily because of language variability and ambiguity. The main reason for this unsatisfactory performance is the reliance on supervised learning (learning from annotated resources), with the amounts of annotation required for accurate open-domain parsing exceeding what is practically feasible. Moreover, the representations defined in these resources typically do not provide abstractions suitable for reasoning.
In this project, we will induce semantic representations from large amounts of unannotated data (i.e., text that has not been labeled by humans) while being guided by information contained in human-annotated data and other forms of linguistic knowledge. This will allow us to scale our approach to many domains and across languages. We will specialize the meaning representations for reasoning by modeling relations (e.g., facts) appearing across sentences in texts (document-level modeling), across different texts, and across texts and knowledge bases. Learning to predict this linked data is closely related to learning to reason, including learning the notions of semantic equivalence and entailment. We will jointly induce semantic parsers (e.g., log-linear feature-rich models) and reasoning models (latent factor models) relying on this data, thus ensuring that the semantic representations are informative for applications requiring reasoning.
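For concreteness, here is a minimal sketch of the log-linear, feature-rich scoring mentioned above, under assumed feature and weight names (the project's actual models would be far richer): p(y | x) ∝ exp(w · f(x, y)) over a candidate set of meaning representations.

```python
# A minimal sketch, with assumed feature/weight names, of log-linear scoring:
# p(y | x) is proportional to exp(w . f(x, y)), normalized over the candidates.
import math

def loglinear_distribution(x, candidates, features, weights):
    """Return p(y | x) for each candidate y, normalized over the candidate set."""
    scores = [
        sum(weights.get(name, 0.0) * value
            for name, value in features(x, y).items())
        for y in candidates
    ]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical usage: score two candidate frame analyses of a sentence.
def features(x, y):
    return {f"verb={x.split()[1]}&frame={y}": 1.0}

probs = loglinear_distribution("Mary bought apples", ["Commerce_buy", "Getting"],
                               features, {"verb=bought&frame=Commerce_buy": 2.0})
print(probs)  # approximately [0.88, 0.12]
```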
Max ERC Funding
1 457 185 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym BUMP
Project BETTER UNDERSTANDING the METAPHYSICS of PREGNANCY
Researcher (PI) Elisabeth Marjolijn Kingma
Host Institution (HI) UNIVERSITY OF SOUTHAMPTON
Call Details Starting Grant (StG), SH5, ERC-2015-STG
Summary Every single human is the product of a pregnancy: an approximately nine-month period during which a foetus develops within its mother’s body. Yet pregnancy has not been a traditional focus in philosophy. That is remarkable, for two reasons:
First, because pregnancy presents fascinating philosophical problems: what, during the pregnancy, is the nature of the relationship between the foetus and the maternal organism? What is the relationship between the pregnant organism and the later baby? And when does one person or organism become two?
Second, because so many topics immediately adjacent to or involved in pregnancy have taken centre stage in philosophical enquiry. Examples include questions about personhood, foetuses, personal identity and the self.
This project launches the metaphysics of pregnancy as an important and fundamental area of philosophical research.
The core aims of the project are:
(1) to develop a philosophically sophisticated account of human pregnancy and birth, and the entities involved in this, that is attentive to our best empirical understanding of human reproductive biology;
(2) to articulate the metaphysics of organisms, persons and selves in a way that acknowledges the details of how we come into existence; and
(3) to start the process of rewriting the legal, social and moral language we use to classify ourselves and our actions, so that it is compatible with and can accommodate the nature of pregnancy.
The project will investigate these questions in the context of a range of philosophical subdisciplines, including analytic metaphysics, philosophy of biology and feminist philosophy, and in close dialogue with our best empirical understanding of the life sciences – most notably physiology.
Max ERC Funding
1 273 290 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym BUNGEE-TOOLS
Project Building Next-Generation Computational Tools for High Resolution Neuroimaging Studies
Researcher (PI) Juan Eugenio Iglesias
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Recent advances in magnetic resonance (MR) acquisition technology are providing us with images of the human brain of increasing detail and resolution. While these images hold promise to greatly increase our understanding of such a complex organ, the neuroimaging community relies on tools (e.g. SPM, FSL, FreeSurfer) which, being over a decade old, were designed to work at much lower resolutions. These tools do not consider brain substructures that are visible in present-day scans, and this inability to capitalize on the vast improvement of MR is hampering progress in the neuroimaging field.
In this ambitious project, which lies at the nexus of medical histology, neuroscience, biomedical imaging, computer vision and statistics, we propose to build a set of next-generation computational tools that will enable neuroimaging studies to take full advantage of the increased resolution of modern MR technology. The core of the tools will be an ultra-high resolution probabilistic atlas of the human brain, built upon multimodal data combining histology and ex vivo MR. The resulting atlas will be used to analyze in vivo brain MR scans, which will require the development of Bayesian segmentation methods beyond the state of the art.
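The core computation behind atlas-based Bayesian segmentation can be sketched as follows; this is a textbook simplification (voxelwise independence, Gaussian intensity likelihoods, invented array names), not the beyond-state-of-the-art methods the project will develop:

```python
# A textbook simplification, not the project's methods: atlas-based Bayesian
# segmentation, where the per-voxel posterior over labels is proportional to
# (atlas prior) x (Gaussian intensity likelihood).
import numpy as np

def bayesian_segment(intensities, atlas_priors, means, variances):
    """
    intensities : (V,)   voxel intensities
    atlas_priors: (V, L) prior probability of each of L labels at each voxel
    means, variances: (L,) assumed Gaussian intensity parameters per label
    Returns the (V,) maximum a posteriori label per voxel.
    """
    diff = intensities[:, None] - means[None, :]                       # (V, L)
    likelihood = (np.exp(-0.5 * diff**2 / variances)
                  / np.sqrt(2.0 * np.pi * variances))                  # (V, L)
    posterior = atlas_priors * likelihood                              # unnormalized
    posterior /= posterior.sum(axis=1, keepdims=True)
    return posterior.argmax(axis=1)
```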
The developed tools, which will be made freely available to the scientific community, will enable the analysis of MR data at a superior level of structural detail, opening completely new opportunities of research in neuroscience. Therefore, we expect the tools to have a tremendous impact on the quest to understand the human brain (in health and in disease), and ultimately on public health and the economy.
Max ERC Funding
1 450 075 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym CLOCK
Project CLIMATE ADAPTATION TO SHIFTING STOCKS
Researcher (PI) Elena Ojea
Host Institution (HI) UNIVERSIDAD DE VIGO
Call Details Starting Grant (StG), SH3, ERC-2015-STG
Summary Management of marine fisheries is still far from incorporating adaptation to climate change, even though global stocks are heavily overexploited and climate change is adding pressure to the resource. In fact, there is growing evidence that current fisheries management systems may no longer be effective under climate change, and this will translate into both ecological and socioeconomic impacts. This research project argues that combining fisheries management science with socio-ecological systems thinking is necessary in order to advance fisheries adaptation to climate change. To this end, the main objectives are to: 1) identify and understand the new challenges raised by climate change for current sustainable fisheries management; 2) develop a novel approach to fisheries adaptation within a socio-ecological framework; 3) provide empirical evidence on potential solutions for the adaptation of fisheries management systems; and 4) help introduce fisheries adaptation at the top of the regional and international adaptation policy agendas. To do this, I will combine modelling and simulation approaches to fisheries with specific case studies in which both biophysical and economic variables will be studied and modelled, and in which individuals will be given the opportunity to participate actively, with participatory methods used to learn their preferences towards adaptation and the consequences of the new scenarios that climate change poses. Three potential case studies are identified – property rights over stocks, property rights over space, and marine reserves – in two European and one international case study areas. As a result, I expect to develop a new Adaptation Framework for fisheries management that is scalable, transferable and easily operationalized, together with a set of case study examples of how to integrate theory and participatory processes with the aim of increasing social, ecological and institutional resilience to climate change.
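As a purely illustrative sketch of the kind of coupled biophysical-economic simulation alluded to above (all parameter names and values are hypothetical, not the project's model), consider logistic stock dynamics with a climate-driven decline in carrying capacity and a fixed harvest rate:

```python
# Illustrative sketch only (hypothetical parameters, not the project's model):
# logistic stock dynamics with a climate-driven decline in carrying capacity
# and a fixed harvest rate, coupling a biophysical variable (biomass) to an
# economic one (revenue).
def simulate_stock(B0=0.5, r=0.4, K0=1.0, k_decline=0.005,
                   harvest_rate=0.2, price=100.0, years=50):
    B = B0
    revenues = []
    for year in range(years):
        K = max(K0 * (1.0 - k_decline * year), 0.1)      # capacity shifts with climate
        catch = harvest_rate * B                         # fixed-rate harvest policy
        B = max(B + r * B * (1.0 - B / K) - catch, 0.0)  # logistic growth minus catch
        revenues.append(price * catch)
    return B, revenues

final_biomass, revenues = simulate_stock()
print(f"final biomass: {final_biomass:.3f}, cumulative revenue: {sum(revenues):.1f}")
```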
Max ERC Funding
1 184 931 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym COLORAMAP
Project Constrained Low-Rank Matrix Approximations: Theoretical and Algorithmic Developments for Practitioners
Researcher (PI) Nicolas Benoit P Gillis
Host Institution (HI) UNIVERSITE DE MONS
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Low-rank matrix approximation (LRA) techniques such as principal component analysis (PCA) are powerful tools for the representation and analysis of high-dimensional data, and are used in a wide variety of areas such as machine learning, signal and image processing, data mining, and optimization. Without any constraints and using the least-squares error, LRA can be solved via the singular value decomposition. However, in practice, this model is often not suitable, mainly because (i) the data might be contaminated with outliers, missing data and non-Gaussian noise, and (ii) the low-rank factors of the decomposition might have to satisfy some specific constraints. Hence, in recent years, many variants of LRA have been introduced, using different constraints on the factors and different objective functions to assess the quality of the approximation; e.g., sparse PCA, PCA with missing data, independent component analysis and nonnegative matrix factorization. Although these new constrained LRA models have become very popular and standard in some fields, there is still a significant gap between theory and practice. In this project, our goal is to reduce this gap by attacking the problem in an integrated way, making connections between LRA variants, and by using four very different but complementary perspectives: (1) computational complexity issues, (2) provably correct algorithms, (3) heuristics for difficult instances, and (4) application-oriented aspects. This unified and multi-disciplinary approach will enable us to understand these problems better, to develop and analyze new and existing algorithms, and then to use them for applications. Our ultimate goal is to provide practitioners with new tools and to allow them to decide which method to use in which situation and to know what to expect from it.
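A minimal numpy sketch contrasting the two settings described above: unconstrained LRA, solved exactly by the truncated SVD (Eckart–Young), versus nonnegative matrix factorization, a constrained variant for which simple multiplicative updates yield only a local solution. This is illustrative textbook material, not the project's algorithms.

```python
# Illustrative textbook material, not the project's algorithms: exact
# unconstrained LRA via truncated SVD versus NMF via Lee-Seung updates.
import numpy as np

def truncated_svd_lra(X, r):
    """Best rank-r approximation of X in the least-squares (Frobenius) sense."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def nmf(X, r, iters=200, eps=1e-9):
    """Nonnegative W (m x r) and H (r x n) with X ~ W @ H; local solution only."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative update keeps H >= 0
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative update keeps W >= 0
    return W, H

X = np.random.default_rng(1).random((30, 20))
print(np.linalg.norm(X - truncated_svd_lra(X, 5)))  # optimal rank-5 error
W, H = nmf(X, 5)
print(np.linalg.norm(X - W @ H))                    # at least the SVD error
```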
Max ERC Funding
1 291 750 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym CombiCompGeom
Project Combinatorial Aspects of Computational Geometry
Researcher (PI) Natan Rubin
Host Institution (HI) BEN-GURION UNIVERSITY OF THE NEGEV
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The project focuses on the interface between computational and combinatorial geometry.
Geometric problems emerge in a variety of computational fields that interact with the physical world.
The performance of geometric algorithms is determined by the description complexity of their underlying combinatorial structures. Hence, most theoretical challenges faced by computational geometry are of a distinctly combinatorial nature.
In the past two decades, computational geometry has been revolutionized by the powerful combination of random sampling techniques with the abstract machinery of geometric arrangements. These insights were used, in turn, to establish state-of-the-art results in combinatorial geometry. Nevertheless, a number of fundamental problems remained open and resisted numerous attempts to solve them.
Motivated by the recent breakthrough results, in which the PI played a central role, we propose two exciting lines of study with the potential to change the landscape of this field.
The first research direction concerns the complexity of Voronoi diagrams -- arguably the most common structures in computational geometry.
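For reference, here is the standard definition (textbook material, not specific to the proposal). Given sites $p_1, \dots, p_n \in \mathbb{R}^d$, the Voronoi cell of $p_i$ is the set of points at least as close to $p_i$ as to any other site,

$$ \mathrm{Vor}(p_i) = \{\, x \in \mathbb{R}^d : \|x - p_i\| \le \|x - p_j\| \ \text{for all } j \neq i \,\}, $$

and the Voronoi diagram is the subdivision of $\mathbb{R}^d$ into these cells. Its worst-case combinatorial complexity is $\Theta(n)$ in the plane but grows to $\Theta(n^{\lceil d/2 \rceil})$ in $d$ dimensions, which is why bounding the complexity of Voronoi diagrams in specific settings remains delicate.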
The second direction concerns combinatorial and algorithmic aspects of geometric intersection structures, including some fundamental open problems in geometric transversal theory. Many of these questions are motivated by geometric variants of general covering and packing problems, and all efficient approximation schemes for them must rely on the intrinsic properties of geometric graphs and hypergraphs.
Any progress in responding to these challenges will constitute a major breakthrough in both computational and combinatorial geometry.
Max ERC Funding
1 303 750 €
Duration
Start date: 2016-09-01, End date: 2021-08-31