Project acronym ARTICULATE
Project A Reduction Theory for Codes and Lattices in Cryptography
Researcher (PI) Leo DUCAS
Host Institution (HI) STICHTING NEDERLANDSE WETENSCHAPPELIJK ONDERZOEK INSTITUTEN
Country Netherlands
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary Codes and lattices are two major mathematical platforms to build quantum-resistant cryptosystems. These cryptosystems will soon be standardized and deployed, replacing historical solutions threatened by Shor's quantum algorithm.
These new cryptosystems rely on the hardness of finding short vectors in a code or in a lattice, a computational problem called reduction. Our confidence in their security relies on the relentless effort of cryptanalysis: quantifying, elucidating and trying to invalidate such hardness assumptions.
*My ERC project aims at discovering faster cryptanalytic algorithms and at building better cryptography via the development of a unified reduction theory enabling a systematic transfer of techniques between codes and lattices.* As an underpinning example for this project, I have successfully transferred the seminal LLL reduction algorithm to binary codes, an algorithm still playing a central role in the cryptanalysis of lattices.
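The lattice side of this transfer can be illustrated concretely. Below is a minimal textbook LLL implementation in Python using exact rational arithmetic; it is a generic sketch of basis reduction for illustration only, not code from the project, and the example basis is hypothetical. It recomputes the Gram-Schmidt data naively, trading speed for clarity.

```python
from fractions import Fraction

def gram_schmidt(B):
    # Gram-Schmidt orthogonalization: returns B* and the mu coefficients.
    n = len(B)
    Bs = [[Fraction(x) for x in b] for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            denom = sum(x * x for x in Bs[j])
            mu[i][j] = sum(Fraction(B[i][k]) * Bs[j][k]
                           for k in range(len(B[i]))) / denom
            Bs[i] = [Bs[i][k] - mu[i][j] * Bs[j][k]
                     for k in range(len(Bs[i]))]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    # Textbook LLL reduction of an integer basis (rows of B).
    B = [list(b) for b in B]
    n = len(B)
    norm2 = lambda v: sum(x * x for x in v)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            _, mu = gram_schmidt(B)           # refresh mu (simple but slow)
            q = round(mu[k][j])
            if q:
                B[k] = [B[k][i] - q * B[j][i] for i in range(len(B[k]))]
        Bs, mu = gram_schmidt(B)
        if norm2(Bs[k]) >= (delta - mu[k][k - 1] ** 2) * norm2(Bs[k - 1]):
            k += 1                            # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]   # swap and step back
            k = max(k - 1, 1)
    return B
```

For example, `lll([[1, 1], [2, 1]])` returns the reduced basis `[[0, -1], [1, 0]]`, whose vectors are as short as possible for this lattice.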
This unified theory will also lead to better mathematical and algorithmic abstractions, improving the clarity, generality, and composability of known techniques. Such qualitative enhancements will also help to obtain quantitative ones. In turn, I will implement these enhancements into open-source libraries, designed for either high performance or ease of use, in order to stimulate further algorithmic exploration by the community.
Beyond algorithms, my approach will also create new connections between existing mathematical theories and the hardness of code and lattice problems. For instance, it leads to considering moduli spaces as a powerful new tool for proving average-case hardness.
By its contributions to both the theoretical and practical aspects of the hardness of lattice and code problems, this project will play a key role in the ongoing transition to quantum-resistant cryptography.
Max ERC Funding
1 497 888 €
Duration
Start date: 2021-01-01, End date: 2025-12-31
Project acronym ARTIST
Project Automated Reasoning with Theories and Induction for Software Technology
Researcher (PI) Laura KOVACS
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Country Austria
Call Details Consolidator Grant (CoG), PE6, ERC-2020-COG
Summary The long list of software failures over the past years raises serious concerns in our digital society, damaging reputations and imposing huge economic burdens on organizations, industries and governments. Improving software reliability is no longer enough; ensuring software reliability is mandatory. Our project complements other advances in the area and addresses this demand by turning first-order theorem proving into an alternative, yet powerful, approach to ensuring software reliability.
Saturation-based proof search is the leading technology for automated first-order theorem proving. The high-gain/high-risk aspect of our project comes from the development and use of saturation-based theorem proving as a unifying framework to reason about software technologies. We use first-order theorem proving methods not only to prove, but also to generate software properties that imply the absence of program errors at intermediate program steps.
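The saturation style of proof search can be illustrated with a toy propositional resolution loop in Python. This is only a sketch: real saturation provers operate on first-order clauses with term orderings, selection, and redundancy elimination, all of which are omitted here, and the clause encoding (signed integers as literals) is our own illustrative convention.

```python
def resolve(c1, c2):
    # All binary resolvents of two clauses (frozensets of signed ints,
    # where -x denotes the negation of literal x).
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def saturate(clauses):
    # Naive given-clause saturation loop: keep resolving until either the
    # empty clause appears (unsatisfiable) or no new clauses can be added.
    processed, todo = set(), list(clauses)
    while todo:
        given = todo.pop()
        if not given:
            return False          # empty clause derived: unsatisfiable
        for other in processed:
            for r in resolve(given, other):
                if r not in processed and r not in todo:
                    todo.append(r)
        processed.add(given)
    return True                   # saturated without the empty clause
```

Encoding p, (not p or q), and (not q) as `[frozenset({1}), frozenset({-1, 2}), frozenset({-2})]`, the loop derives the empty clause and reports unsatisfiability.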
Generating and proving program properties calls for new methods supporting reasoning with both theories and quantifiers. Our project extends saturation-based first-order theorem provers with domain-specific inference rules to keep reasoning efficient. This includes commonly used theories in software development, such as the theories of integers, arrays and inductively defined data types, and automation of induction within saturation-based theorem proving, contributing to the ultimate goal of generating and proving inductive software properties, such as invariants.
Thanks to the full automation of our project, our results can be integrated and used in other frameworks, to allow end-users and developers of software technologies to gain from theorem proving without the need of becoming experts of it.
Max ERC Funding
2 000 000 €
Duration
Start date: 2021-07-01, End date: 2026-06-30
Project acronym AutoProbe
Project Automated Probabilistic Black-Box Verification
Researcher (PI) Alexandra SILVA
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Country United Kingdom
Call Details Consolidator Grant (CoG), PE6, ERC-2020-COG
Summary One of the longstanding challenges in Computer Science has been the development of methods and tools providing rigorous guarantees about systems’ behavior, performance, and security. There have been many successes in overcoming this challenge, notably the invention and widespread use of model checking. However, existing methods are impaired by the tension between the need for fast-developing systems and the slowdown caused by the complexity of providing a model against which running systems can be verified. Automata learning – automated discovery of automata models from system observations such as test logs – is emerging as a highly effective bug-finding technique with applications in the verification of bank cards and basic network communication protocols. The design of algorithms for automata learning is a fundamental research problem, and in the last years much progress has been made in developing and understanding new algorithms (including the PI’s own work). Yet, existing algorithms do not support crucial quantitative or concurrency aspects that are essential in modelling properties such as network congestion and fault-tolerance. The central objective of this project is to develop a new verification framework that enables automated model-based verification for probabilistic and concurrent systems, motivated by applications in networks. We will provide active learning algorithms, in the style of Angluin’s seminal L* algorithm, for automata models that were so far too complex to be tackled. We will base our development on rigorous semantic foundations, developed by the PI in recent years, which provide correctness for the algorithms in a modular way. The project will significantly advance model-based verification in new and previously unexplored directions. This line of research will not only result in fundamental theoretical contributions and insights in their own right but will also impact the practice of concurrent and probabilistic network verification.
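Angluin-style active learning can be sketched compactly. The Python sketch below maintains an observation table driven by a membership oracle and approximates the equivalence oracle by bounded testing over all short words; counterexample handling follows the suffix-based (Maler-Pnueli style) variant rather than Angluin's original, and the function and parameter names are our own illustrative choices.

```python
from itertools import product

def lstar(member, alphabet, max_len=6):
    # Pared-down L*-style learner: prefixes S, suffixes E, and a
    # membership oracle `member`; equivalence is approximated by
    # testing every word of length <= max_len.
    S, E = [""], [""]

    def row(s):
        return tuple(member(s + e) for e in E)

    while True:
        # Close the table: every one-letter extension matches some row.
        changed = True
        while changed:
            changed = False
            rows = {row(s) for s in S}
            for s in list(S):
                for a in alphabet:
                    if row(s + a) not in rows:
                        S.append(s + a)
                        rows.add(row(s + a))
                        changed = True
        # Hypothesis DFA: states are distinct rows, transitions via row().
        rep = {row(s): s for s in S}

        def accepts(w):
            cur = ""
            for a in w:
                cur = rep[row(cur + a)]
            return row(cur)[0]    # E[0] == "", so this is member(cur)

        cex = next(("".join(w)
                    for n in range(max_len + 1)
                    for w in product(alphabet, repeat=n)
                    if accepts("".join(w)) != member("".join(w))), None)
        if cex is None:
            return accepts
        for i in range(len(cex) + 1):   # add all suffixes of cex to E
            if cex[i:] not in E:
                E.append(cex[i:])
```

For instance, `lstar(lambda w: w.count("a") % 2 == 0, "ab")` learns the two-state DFA accepting words with an even number of a's.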
Max ERC Funding
2 000 000 €
Duration
Start date: 2021-04-01, End date: 2026-03-31
Project acronym BeyondMoore
Project Pioneering a New Path in Parallel Programming Beyond Moore’s Law
Researcher (PI) Didem UNAT
Host Institution (HI) KOC UNIVERSITY
Country Turkey
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary BEYONDMOORE addresses the timely research challenge of solving the software side of the post-Moore crisis. The techno-economic model in computing known as Moore's Law has led to an exceptionally productive era for humanity and numerous scientific discoveries over the past 50+ years. However, due to fundamental limits in chip manufacturing, we are about to mark the end of Moore's Law and enter a new era of computing where continued performance improvement will likely emerge from extreme heterogeneity. The new systems are expected to bring a diverse set of hardware accelerators and memory technologies. Current solutions to program such systems are host-centric, where the host processor orchestrates the entire execution. This poses major scalability issues and severely limits the types of parallelism that can be exploited. Unless there is a fundamental change in our approach to heterogeneous parallel programming, we risk substantially underutilizing upcoming systems. BEYONDMOORE offers a way out of this programming crisis and proposes an autonomous execution model that is more scalable, flexible, and accelerator-centric by design. In this model, accelerators have autonomy; they compute, collaborate, and communicate with each other without the involvement of the host. The execution model is powered by a rich set of programming abstractions that enable a program to be modeled as a task graph. To efficiently execute this task graph, BEYONDMOORE will develop a software framework that performs static and dynamic optimizations, issues accelerator-initiated data transfers, and reasons about parallel execution strategies that exploit both processor and memory heterogeneity. To aid the optimizations, a comprehensive cost model that characterizes both target applications and emerging architectures will be devised.
Complete success of BEYONDMOORE will enable continued progress in computing which in turn will power science and technology in the life after Moore’s Law.
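The task-graph execution model can be illustrated, in a vastly simplified host-centric form, by a topological scheduler based on Kahn's algorithm. This sketch runs tasks sequentially in dependency order and is only meant to show the dependency-driven ordering that a task graph induces, not the accelerator-autonomous execution BEYONDMOORE proposes; the task and dependency encoding is our own.

```python
from collections import deque

def run_task_graph(tasks, deps):
    # tasks: name -> callable; deps: name -> set of prerequisite names.
    # Kahn's algorithm: a task runs once all its dependencies have run.
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    children = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            children[d].append(t)
    ready = deque(t for t, n in indeg.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                 # here: run locally; in an
        order.append(t)            # accelerator-centric model, offload
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(tasks):
        raise ValueError("cycle in task graph")
    return order
```

With tasks a, b, c and dependencies b after a, c after a and b, the scheduler executes them in the order a, b, c.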
Max ERC Funding
1 500 000 €
Duration
Start date: 2021-08-01, End date: 2026-07-31
Project acronym BOBR
Project Decomposition methods for discrete problems
Researcher (PI) Michal Pilipczuk
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Country Poland
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary The main goal of the project is to radically expand our understanding of decomposition methods for discrete problems, with a particular focus on the design of parameterized and approximation algorithms on graphs. We will concentrate on four topics where we see a potential for either establishing new directions, or reaching far beyond the current state of the art.
(Beyond) Sparsity: The field of Sparsity is a rapidly developing area of graph theory that studies abstract notions of uniform sparseness in graphs and provides a wealth of tools for algorithm design. While there are still many unknowns within this field, we would like to reach beyond sparse graphs by developing a theory of well-structured dense graphs, inspired by the advances in Sparsity.
Parameterized dynamic algorithms: The idea of parameterization has so far received little attention in the field of dynamic algorithms. Our goal is to establish solid foundations for the direction of parameterized dynamic algorithms by providing dynamic variants of basic decomposition tools used in parameterized complexity.
Parameterization and approximation on planar graphs: The areas of parameterized algorithms and of approximation schemes on planar graphs share a core set of decomposition techniques and benefit from extensive cross-inspiration. We will approach several intriguing questions in this area while focusing on the idea of parameterized approximation schemes, where parameterization and approximation is explicitly combined.
Forbidding induced subgraphs: Structural graph theory offers a wealth of tools for understanding structure in graph classes characterized by forbidding induced subgraphs. This structure, while elusive and difficult to exploit, often leads to surprising tractability results. Motivated by recent advances, we propose to focus on finding general-use techniques for designing subexponential-time, approximation, and parameterized algorithms in this setting.
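A minimal example of decomposition-based algorithm design, in the simplest setting where the decomposition is the input tree itself, is the classic two-state dynamic program for maximum independent set: each vertex stores the best solution size with and without that vertex, combined bottom-up. The function name and adjacency-list encoding are illustrative, not from the proposal.

```python
def mis_tree(adj, root=0):
    # adj: adjacency lists of a tree. Returns the size of a maximum
    # independent set via the classic include/exclude tree DP.
    incl, excl = {}, {}
    order, parent = [root], {root: None}
    for v in order:                      # BFS to fix a processing order
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                order.append(u)
    for v in reversed(order):            # process leaves first
        kids = [u for u in adj[v] if u != parent[v]]
        incl[v] = 1 + sum(excl[u] for u in kids)
        excl[v] = sum(max(incl[u], excl[u]) for u in kids)
    return max(incl[root], excl[root])
```

On a path with five vertices the answer is 3, and on a star with four leaves it is 4; the same include/exclude scheme generalizes to dynamic programming over tree decompositions of bounded-treewidth graphs.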
Max ERC Funding
1 355 688 €
Duration
Start date: 2021-04-01, End date: 2026-03-31
Project acronym DA QC
Project Design Automation for Quantum Computing
Researcher (PI) Robert WILLE
Host Institution (HI) UNIVERSITAT LINZ
Country Austria
Call Details Consolidator Grant (CoG), PE6, ERC-2020-COG
Summary In the 1970s, researchers started to utilize quantum mechanics to address questions in computer science and information theory, establishing new research directions such as quantum computing. Now, more than four decades later, we are at the dawn of a new "computing age" in which quantum computers will indeed find their way into practical applications. However, while impressive accomplishments can be observed in the physical realization of quantum computers, the development of automated tools and methods that assist in the design and realization of applications for those devices is at risk of no longer keeping up with this development, leaving a situation where we might have powerful quantum computers but hardly any proper means to actually use them.
This project aims to close this upcoming design gap by developing efficient and practically relevant methods for the corresponding simulation, synthesis, and verification tasks. While the current state of the art severely suffers from the interdisciplinarity of quantum computing (leading to the consideration of inappropriate models, inconsistent interpretations, or "wrong" problem formulations), this project will build a bridge between the design automation community and the quantum computing community. This will make it possible to fully exploit the potential of design automation, which is hardly utilized in quantum computing yet.
Since quantum computers are reaching feasibility, the methods developed within this project will be in high demand. Preliminary investigations conducted in preparation of this proposal already showed the potential of such an endeavor and raised significant attention in the quantum computing community and among its "big players". The project plans to continue these developments on a larger scale, eventually providing the foundation for design automation methods that accomplish for quantum computing what the design automation community realized for classical electronic circuits.
Max ERC Funding
1 979 552 €
Duration
Start date: 2021-07-01, End date: 2026-06-30
Project acronym DistOpt-BydWorstCase
Project Distributed Optimization Beyond Worst-Case Topologies
Researcher (PI) Bernhard Haeupler
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Country Switzerland
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary Modern systems are increasingly decentralized, and massively distributed computations play a vital role in the systems of the future. This project aims to advance the foundational aspects of distributed computing, particularly message-passing algorithms for optimization problems.
The strong shift towards distributed systems has also led to a flurry of works on such algorithms: many optimization problems now have distributed algorithms with optimal worst-case performance guarantees. Generally, these guarantees cannot be improved because they match unconditional impossibility results, which prove severe limitations on the performance of distributed algorithms in some (pathological) network topologies. Real-world networks, however, are never worst-case and do not share the limiting bottleneck characteristics of these pathological topologies. In fact, there is no known barrier to ultra-fast polylogarithmic-round distributed algorithms on any network of interest. This leaves an exponential gap between current worst-case-optimal algorithms and what is likely possible in many, if not all, real-world settings.
Motivated by this, this project provides a program for a general toolbox and theory of message-passing optimization algorithms that go beyond worst-case topologies. The main and guiding high-risk/high-gain goal is the development of universally optimal distributed algorithms, which are competitive with the best algorithm on any given topology. This would constitute the strongest possible form of algorithmically adjusting to non-worst-case topologies. A detailed program with many concrete and smaller stepping stones towards this ambitious breakthrough objective is provided. Many of the novel questions stemming from this program and proposal are highly interdisciplinary, crossing boundaries between information theory, distributed computing, topological graph theory, and other parts of theoretical computer science, and are fundamental and interesting in their own right.
Max ERC Funding
1 540 000 €
Duration
Start date: 2020-09-01, End date: 2025-08-31
Project acronym DISTRES
Project A Graph Theoretic Approach for Resilient Distributed Algorithms
Researcher (PI) Merav Parter
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Country Israel
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary Following the immense recent advances in distributed networks, the explosive growth of the Internet, and our increased dependence on these infrastructures, guaranteeing the uninterrupted operation of communication networks has become a major objective in network algorithms. The modern instantiations of distributed networks, such as the Bitcoin network and cloud computing, introduce, in addition, new security challenges that deserve urgent attention in both theory and practice. The goal of this project is to develop a unified framework for obtaining fast, resilient and secure distributed algorithms for fundamental graph problems. We will be focusing on three main objectives: 1. Developing efficient distributed algorithms that handle various adversarial settings, such as node crashes and Byzantine attacks. 2. Initiating and establishing the theoretical exploration of security in distributed graph algorithms. Such a notion has been addressed before mainly in the context of secure multi-party computation (MPC). The heart of our approach is to develop new graph-theoretical infrastructures to provide graphical secure channels between nodes in a communication network of arbitrary topology. 3. Exploring the power of interaction between an untrusted prover and a distributed verifier. This model touches upon central theoretical concepts concerning randomness, communication, and their interplay with the underlying graph. The main novelty in addressing these objectives is in our approach, which is based on taking a graph-theoretic perspective where common notions of resilience requirements will be translated into suitably tailored combinatorial graph structures. We believe that the proposed plan will deepen the theoretical foundations for resilient distributed computation, strengthen the connections with the areas of fault-tolerant network design and information-theoretic security, and provide a refreshing perspective on extensively studied graph-theoretical concepts.
Max ERC Funding
1 450 085 €
Duration
Start date: 2020-11-01, End date: 2025-10-31
Project acronym DynASoAr
Project Dynamic Algorithms Against Strong Adversaries
Researcher (PI) Sebastian Forster
Host Institution (HI) PARIS-LODRON-UNIVERSITAT SALZBURG
Country Austria
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary From a procedural viewpoint, an algorithm is a list of instructions that transforms a given input into the desired output. While this paradigm has been successfully applied in theory and practice, it completely neglects the fact that in many scenarios the input is not given to the algorithm in its entirety at the beginning and might undergo changes that the algorithm needs to react to. Formally, such a situation can be modeled as a game between an adversary creating the sequence of updates to the input and an algorithm that tries to process each of these updates as fast as possible. Researchers have studied such dynamic problems with increasing interest in recent years.
However, many state-of-the-art solutions suffer from at least one of the following drawbacks: (1) Many dynamic algorithms are randomized and assume that the sequence of updates is independent of the random choices made by the algorithm. This is not justified in situations where the next update to the input naturally depends on the previous outputs of the algorithm. (2) Many dynamic algorithms achieve amortized running time guarantees where the stated guarantee on processing each update only holds on average and occasionally significantly more time might be needed. This is insufficient for real-time systems requiring hard worst-case guarantees. The goal of this project is to design dynamic algorithms free from these two shortcomings. Formally, this amounts to giving the adversary the following additional powers: (1) adapting its update sequence to the outputs of the algorithm and (2) discarding the algorithm if some update is not processed in time. While isolated results in this direction exist, with some of them obtained by the PI, the unique feature of this project is the systematic study of these stronger adversarial models for otherwise well-studied dynamic problems. Our results will facilitate the use of dynamic algorithms in both real-world applications and in the design of static algorithms.
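The amortized-versus-worst-case distinction above can be illustrated with the textbook dynamic array with capacity doubling (a generic illustration, not one of the project's algorithms): most appends take constant time, but an append that triggers a resize copies every stored element, so a hard per-update deadline can be missed even though the average cost per update stays O(1).

```python
# Illustrative sketch: amortized O(1) appends that occasionally cost O(n).
class DynamicArray:
    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None]
        self.last_copy_cost = 0  # elements copied by the most recent append

    def append(self, x):
        self.last_copy_cost = 0
        if self._size == self._capacity:
            # The occasional expensive update: double capacity, copy everything.
            self._capacity *= 2
            new_data = [None] * self._capacity
            for i in range(self._size):
                new_data[i] = self._data[i]
            self.last_copy_cost = self._size
            self._data = new_data
        self._data[self._size] = x
        self._size += 1

arr = DynamicArray()
costs = [0] * 16
for i in range(16):
    arr.append(i)
    costs[i] = arr.last_copy_cost
# Total copy work over 16 appends is 15 (amortized O(1) per append),
# but single updates at capacity boundaries copy 1, 2, 4, then 8 elements.
```

A real-time system that must answer within a fixed budget per update cannot tolerate the resize spikes, which is exactly why the project targets worst-case rather than amortized guarantees.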
Max ERC Funding
1 499 843 €
Duration
Start date: 2021-09-01, End date: 2026-08-31
Project acronym EVA
Project Expectational Visual Artificial Intelligence
Researcher (PI) Efstratios Gavves
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Country Netherlands
Call Details Starting Grant (StG), PE6, ERC-2020-STG
Summary Visual artificial intelligence automatically interprets what happens in visual data like videos. Today’s research grapples with queries like: “Is this person playing basketball?”; “Find the location of the brain stroke”; or “Track the glacier fractures in satellite footage”. All these queries concern visual observations that have already taken place: today’s algorithms focus on explaining past visual observations. Naturally, not all queries are about the past: “Will this person draw something in or out of their pocket?”; “Where will the tumour be in 5 seconds given breathing patterns and moving organs?”; or “How will the glacier fracture given the current motion and melting patterns?”. For these and similar queries, the next generation of visual algorithms must expect what happens next given past visual observations. Visual artificial intelligence must also be able to anticipate and prevent events before the fact, rather than explain them only after it. I propose an ambitious 5-year project to design algorithms that learn to expect possible futures from visual sequences.
The main challenge in expecting possible futures is having visual algorithms that learn temporality in visual sequences. Today’s algorithms cannot do this convincingly. First, they are time-deterministic and ignore uncertainty, which is part of any expected future. I propose time-stochastic visual algorithms. Second, today’s algorithms are time-extrinsic and treat time as an external input or output variable. I propose time-intrinsic visual algorithms that integrate time within their latent representations. Third, visual algorithms must account for innumerable spatiotemporal dynamics despite their own finite nature. I propose time-geometric visual algorithms that constrain temporal latent spaces to known geometries.
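The time-extrinsic versus time-intrinsic contrast can be made concrete with a toy sketch (a hypothetical illustration, not the project's actual models): an extrinsic predictor receives elapsed time as an explicit input, whereas an intrinsic predictor carries time only implicitly, in how its latent state evolves from one observation to the next.

```python
# Toy illustration (hypothetical, not the project's models).

# Time-extrinsic: the prediction is an explicit function of elapsed time t.
def extrinsic_predict(x0, t, drift=0.1):
    return x0 + drift * t

# Time-intrinsic: no time variable appears; temporality lives in the way the
# latent state is updated as observations stream in.
class IntrinsicPredictor:
    def __init__(self, drift=0.1):
        self.state = None
        self.drift = drift

    def observe(self, x):
        # fold the new observation into the latent state
        self.state = x if self.state is None else 0.5 * (self.state + x)

    def predict_next(self):
        # one latent step forward; time is implicit in the update sequence
        return self.state + self.drift

model = IntrinsicPredictor()
for x in [1.0, 1.1, 1.2]:
    model.observe(x)
```

In the extrinsic style, asking about a different horizon means feeding in a different t; in the intrinsic style, the question is answered by rolling the latent state forward, which is closer to integrating time within the representation itself.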
EVA addresses fundamental research issues in the automatic interpretation of future visual sequences. Its results will serve as a basis for ground-breaking technological advances in practical vision applications.
Max ERC Funding
1 499 562 €
Duration
Start date: 2020-12-01, End date: 2025-11-30