Project acronym 2D–SYNETRA
Project Two-dimensional colloidal nanostructures - Synthesis and electrical transport
Researcher (PI) Christian Klinke
Host Institution (HI) UNIVERSITAET HAMBURG
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary We propose to develop truly two-dimensional continuous materials and two-dimensional monolayer films composed of individual nanocrystals by the comparatively fast, inexpensive, and scalable colloidal synthesis method. The materials’ properties will be studied in detail, especially regarding their (photo-)electrical transport. This will allow the development of new types of device structures, such as Coulomb-blockade- and field-enhancement-based transistors.
Recently, we demonstrated the possibility of synthesizing truly two-dimensional colloidal nanostructures in a controlled manner. We will investigate their formation mechanism, synthesize further materials as “nanosheets”, develop methodologies to tune their geometrical properties, and study their (photo-)electrical properties.
Furthermore, we will use the Langmuir-Blodgett method to deposit highly ordered monolayers of monodisperse nanoparticles. Such structures show interesting transport properties governed by Coulomb blockade effects known from individual nanoparticles. This leads to semiconductor-like behavior in metal nanoparticle films. The understanding of electrical transport in such “multi-tunnel devices” is still very limited. Thus, we will investigate this concept in detail and take it to its limits. Besides improving the quality and exchanging the material, we will tune the nanoparticles’ size and shape in order to gain a deeper understanding of the electrical properties of supercrystallographic assemblies. Furthermore, we will develop device concepts for diode and transistor structures which take into account the novel properties of the low-dimensional assemblies.
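For orientation, the Coulomb blockade regime mentioned above is set by standard single-electron physics (a textbook relation, not a result of this project): transport through a nanoparticle is suppressed when its charging energy exceeds the thermal energy.

```latex
% Single-electron charging energy of a nanoparticle with self-capacitance C;
% for a sphere of radius r in a medium of relative permittivity \epsilon_r,
% C \approx 4\pi\epsilon_0\epsilon_r r. Blockade requires E_C \gg k_B T.
E_C = \frac{e^{2}}{2C}
```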
Nanosheets and monolayers of nanoparticles truly follow the principle of building devices by the bottom-up approach and allow electrical transport measurements in a 2D regime. Highly ordered nanomaterial systems possess electronic properties that can be manipulated easily and reliably, which makes them interesting for future (inexpensive) electronic devices.
Max ERC Funding
1 497 200 €
Duration
Start date: 2013-02-01, End date: 2019-01-31
Project acronym BeyondWorstCase
Project Algorithms beyond the Worst Case
Researcher (PI) Heiko Roglin
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary For many optimization problems that arise in logistics, information retrieval, and other contexts, the classical theory of algorithms has lost its grip on reality because it is based on a pessimistic worst-case perspective, in which the performance of an algorithm is measured solely by its behavior on the worst possible input. This does not take into consideration that worst-case inputs are often rather contrived and occur only rarely in practical applications. It has led to the situation that for many problems the classical theory is not able to differentiate meaningfully between different algorithms. Even worse, for some important problems it recommends algorithms that perform badly in practice over algorithms that work well in practice, only because the artificial worst-case performance of the latter is bad.
We will study classic optimization problems (traveling salesperson problem, linear programming, etc.) as well as problems coming from machine learning and information retrieval. All these problems have in common that the algorithms most successful in practice have a devastating worst-case performance, even though they clearly outperform the theoretically best algorithms in practice.
Only in recent years has a paradigm shift towards a more realistic and robust algorithmic theory been initiated. This project will play a major role in this paradigm shift by developing and exploring novel theoretical approaches (e.g. smoothed analysis) to reconcile theory and practice. A more realistic theory will have a profound impact on the design and analysis of algorithms in the future, and the insights gained in this project will lead to algorithmic tools for large-scale optimization problems that improve on existing ad hoc methods. We will not only work theoretically but also test the applicability of our theoretical considerations in experimental studies.
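As background on the smoothed analysis mentioned above (the standard definition due to Spielman and Teng, stated here only for orientation): instead of the worst case alone, one takes the worst case over inputs of the expected running time under small random perturbations.

```latex
% Smoothed complexity of an algorithm with running time T: worst case over
% inputs x of size n of the expected running time under a random perturbation
% g scaled by \sigma; \sigma \to 0 recovers worst-case, large \sigma average-case.
C_{\mathrm{smoothed}}(n,\sigma) \;=\; \max_{x \in X_n}\; \mathbb{E}_{g}\!\left[\, T(x + \sigma g) \,\right]
```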
Max ERC Funding
1 235 820 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym CV-SUPER
Project Computer Vision for Scene Understanding from a first-person Perspective
Researcher (PI) Bastian Leibe
Host Institution (HI) RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary "The goal of CV-SUPER is to create the technology to perform dynamic visual scene understanding from the perspective of a moving human observer. Briefly stated, we want to enable computers to see and understand what humans see when they navigate their way through busy inner-city locations. Our target scenario is dynamic visual scene understanding in public spaces, such as pedestrian zones, shopping malls, or other locations primarily designed for humans. CV-SUPER will develop computer vision algorithms that can observe the people populating those spaces, interpret and understand their actions and their interactions with other people and inanimate objects, and from this understanding derive predictions of their future behaviors within the next few seconds. In addition, we will develop methods to infer semantic properties of the observed environment and learn to recognize how those affect people’s actions. Supporting those tasks, we will develop a novel design of an object recognition system that scales up to potentially hundreds of categories. Finally, we will bind all those components together in a dynamic 3D world model, showing the world’s current state and facilitating predictions how this state will most likely change within the next few seconds. These are crucial capabilities for the creation of technical systems that may one day assist humans in their daily lives within such busy spaces, e.g., in the form of personal assistance devices for elderly or visually impaired people or in the form of future generations of mobile service robots and intelligent vehicles."
Max ERC Funding
1 499 960 €
Duration
Start date: 2012-11-01, End date: 2017-10-31
Project acronym DEPENDABLECLOUD
Project Towards the dependable cloud: Building the foundations for tomorrow's dependable cloud computing
Researcher (PI) Rodrigo Seromenho Miragaia Rodrigues
Host Institution (HI) INESC ID - INSTITUTO DE ENGENHARIADE SISTEMAS E COMPUTADORES, INVESTIGACAO E DESENVOLVIMENTO EM LISBOA
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Cloud computing is being increasingly adopted by individuals, organizations, and governments. However, as the computations that are offloaded to the cloud expand to societal-critical services, the dependability requirements of cloud services become much higher, and we need to ensure that the infrastructure that supports these services is ready to meet these requirements. In particular, this proposal tackles the challenges that arise from two distinctive characteristics of the cloud infrastructure.
The first is that non-crash faults, despite being considered highly unlikely by the designers of traditional systems, become commonplace at the scale and complexity of the cloud infrastructure. We argue that the current ad-hoc methods for handling these faults are insufficient, and that the only principled approach of assuming Byzantine faults is too pessimistic. Therefore, we call for a new systematic approach to tolerating non-crash, non-adversarial faults. This requires the definition of a new fault model, and the construction of a series of building blocks and key protocol elements that enable the construction of fault-tolerant cloud services.
The second issue is that to meet their scalability requirements, cloud services spread their state across multiple data centers, and direct users to the closest one. This raises the issue that not all operations can be executed optimistically, without being aware of concurrent operations over the same data, and thus multiple levels of consistency must coexist. However, this puts the onus of reasoning about which behaviors are allowed under such a hybrid consistency model on the programmer of the service. We propose a systematic solution to this problem, which includes a novel consistency model that allows for developing highly scalable services that are fast when possible and consistent when necessary, and a labeling methodology to guide the programmer in deciding which operations can run at each consistency level.
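A minimal sketch of the kind of per-operation labeling the proposal envisions, in the spirit of "fast when possible and consistent when necessary" (the two-level split, the operation names, and the API are illustrative assumptions, not the proposal's actual model):

```python
from enum import Enum

class Consistency(Enum):
    FAST = "fast"      # executed at the local data center, replicated lazily
    STRONG = "strong"  # coordinated across data centers before completing

# Hypothetical labels assigned by the programmer for a geo-replicated service.
OPERATION_LABELS = {
    "post_comment": Consistency.FAST,      # commutes with concurrent updates
    "read_timeline": Consistency.FAST,     # tolerates slightly stale data
    "withdraw_funds": Consistency.STRONG,  # must preserve a global invariant
}

def required_consistency(operation: str) -> Consistency:
    """Return the level an operation must run at; default to STRONG
    whenever the programmer has not argued that a weaker level is safe."""
    return OPERATION_LABELS.get(operation, Consistency.STRONG)
```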
Max ERC Funding
1 076 084 €
Duration
Start date: 2012-10-01, End date: 2018-01-31
Project acronym EN-LUMINATE
Project Enhancing and Tuning Electroluminescence with Nanoantennas
Researcher (PI) Jana Zaumseil
Host Institution (HI) RUPRECHT-KARLS-UNIVERSITAET HEIDELBERG
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary "Being able to enhance and tune the interaction of a light wave with a molecule or nanoparticle on a fundamental level opens up an exciting range of applications such as more efficient solar cells, more sensitive photon detectors and brighter emitters for lighting applications. Nanoplasmonics promises to offer this level of control. Taking the current knowledge on nanoantennas a step further we will integrate them in organic and carbon-nanotube light-emitting devices to improve and tune their emission in unprecedented ways. As our testing platform we will use light-emitting field-effect transistors (LEFETs). Their planar structure, where the light emission zone can be positioned at any point allows for easy and controlled incorporation of plasmonic structures without interfering with charge transport. LEFETs can be made from a wide range of semiconducting materials. We will apply nanoantennas in LEFETs to 1) Enhance electroluminescence of high mobility organic semiconductors 2) Tune excitation decay and transition selection rules in organic semiconductors and 3) Enhance photo- and electroluminescence of single-walled carbon nanotubes. All of these materials offer high carrier mobilities and therefore high currents but have very low fluorescence efficiencies that can be improved substantially by nanoantennas. We will study the influence of nanoantennas on the fundamental emission properties of these different types of emitters. At the same time we will improve their efficiency in light-emitting devices and thus enable new and innovative applications."
Max ERC Funding
1 496 684 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym MOLMESON
Project Molecular Mesoscopics for Organic Nano-Optoelectronics
Researcher (PI) John Mark Lupton
Host Institution (HI) UNIVERSITAET REGENSBURG
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary This project explores the boundary between the individual molecule and the bulk solid in the context of polymeric organic semiconductors by constructing and studying molecular aggregates from the single-molecule level upwards. Using time-resolved and steady-state spectroscopies at elevated and at cryogenic temperatures, the interaction of individual molecular units will be revealed. For example, the question arises as to how large a molecular aggregate can become while still behaving as an individual quantum-mechanical entity, emitting just one photon at a time. How far can photoexcitations migrate in self-organized mesoscopic aggregates, and what is the interaction length with quenching species such as charges? Under which conditions does the coupling between molecular units weaken to become incoherent and irreversible? The work program combines routes to controlling self-assembly in situ with monitoring of the conformational dynamics of the polymer chain as well as aggregation effects in real time. Superresolution microscopy techniques will be applied to spatially localize excitations on a polymer chain and watch their migration. Single-molecule fluorescence will be combined with spin-resonance techniques to study charge formation and unravel radical-based material breakdown processes. Besides this bottom-up control of spectroscopic features, a top-down approach to device engineering will be explored with the goal of identifying the smallest possible device features below which the effects of discreteness dominate, leading to single-electron and single-photon devices. Breakthroughs with implications beyond organic electronics are anticipated, since the materials provide models for polymer physics, quantum optics, and solid-state mesoscopics. Sensory functions are expected to derive from the control and understanding of light-matter interactions on super-molecular, sub-ensemble length scales.
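The usual experimental criterion for "emitting just one photon at a time" is photon antibunching in the second-order intensity correlation function (textbook photon statistics, stated here only for orientation):

```latex
% Second-order correlation of the emitted intensity I(t); a measured value
% g^{(2)}(0) < 1/2 indicates that the aggregate acts as a single quantum emitter.
g^{(2)}(\tau) = \frac{\langle I(t)\, I(t+\tau)\rangle}{\langle I(t)\rangle^{2}},
\qquad g^{(2)}(0) < \tfrac{1}{2}
```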
Max ERC Funding
1 480 556 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym NOLEPRO
Project Nonlinear Eigenproblems for Data Analysis
Researcher (PI) Matthias Hein
Host Institution (HI) UNIVERSITAT DES SAARLANDES
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary In machine learning and exploratory data analysis, the major goal is the development of solutions for the automatic and efficient extraction of knowledge from data. This ability is key for further progress in science and engineering. A large class of data analysis methods is based on linear eigenproblems. While linear eigenproblems are well studied, and a large part of numerical linear algebra is dedicated to the efficient calculation of eigenvectors of all kinds of structured matrices, they are limited in their modeling capabilities. Important properties like robustness against outliers and sparsity of the eigenvectors are impossible to realize. In turn, we have shown recently that many problems in data analysis can be naturally formulated as nonlinear eigenproblems. In order to use the rich structure of nonlinear eigenproblems with an ease similar to that of linear eigenproblems, a major goal of this proposal is to develop a general framework for the computation of nonlinear eigenvectors. Furthermore, the great potential of nonlinear eigenproblems will be explored in various application areas. As the scope of nonlinear eigenproblems goes far beyond data analysis, this project will have major impact not only in machine learning and its use in computer vision, bioinformatics, and information retrieval, but also in other areas of the natural sciences.
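To fix notation, a sketch of what "nonlinear eigenproblem" commonly means in this variational setting (the functionals F and G are placeholders, not the specific problems to be studied): linear eigenvectors arise as critical points of the Rayleigh quotient, and replacing the two quadratic forms by more general functionals yields a nonlinear eigenvector equation.

```latex
% Linear case: critical points of the Rayleigh quotient satisfy A v = \lambda v.
R(v) = \frac{\langle v, A v\rangle}{\langle v, v\rangle}
% Nonlinear case: critical points of a ratio of (possibly non-smooth,
% positively homogeneous) functionals F and G satisfy, with subdifferentials,
Q(v) = \frac{F(v)}{G(v)}, \qquad 0 \in \partial F(v) - \lambda\,\partial G(v)
```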
Max ERC Funding
1 271 992 €
Duration
Start date: 2012-10-01, End date: 2017-09-30
Project acronym pcCell
Project Physicochemical principles of efficient information processing in biological cells
Researcher (PI) Frank Noe
Host Institution (HI) FREIE UNIVERSITAET BERLIN
Call Details Starting Grant (StG), PE4, ERC-2012-StG_20111012
Summary Biological cellular function relies on the coordination of many information processing steps in which specific biomolecular complexes are formed. But how can proteins and ligands find their targets in extremely crowded cellular membranes or cytosol? Here, I propose that intracellular signal processing depends upon the spatiotemporal order of molecules arising from “dynamical sorting”, i.e. no stable structure may exist at any time and yet the molecules might be ordered at all times. I propose to investigate the existence and the physicochemical driving forces of such dynamical sorting mechanisms in selected neuronal synaptic membrane proteins that must be tightly coordinated during neurotransmission and recovery. To this end, massive atomistic and coarse-grained molecular simulations will be combined with statistical mechanical theory, producing predictions to be experimentally validated through our collaborators.
The main methodological challenge of this proposal is the infamous sampling problem: Even with immense computational effort, unbiased molecular dynamics trajectory lengths are at most microseconds for the protein systems considered here. This is insufficient to calculate relevant states, probabilities, and rates for these systems. Based on recent mathematical and computational groundwork developed by us and others, I propose to develop enhanced sampling methods that will yield an effective speedup of at least 4 orders of magnitude over current simulations. This will allow protein-ligand and protein-protein interactions to be sampled efficiently using atomistic models, thus having a far-reaching impact on the field.
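For context on why many short trajectories can still yield long-timescale kinetics (a standard kinetic-model relation, not necessarily the enhanced-sampling method to be developed here): if short simulations are combined into a transition matrix between discretized states, its eigenvalues encode relaxation timescales far beyond the length of any individual trajectory.

```latex
% Implied timescales of a Markov state model: T(\tau) is the transition matrix
% estimated at lag time \tau, and \lambda_i(\tau) are its eigenvalues.
t_i = -\,\frac{\tau}{\ln\lvert\lambda_i(\tau)\rvert}
```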
The proposed project is at the forefront of an interdisciplinary field, spanning traditional areas such as physical chemistry, computer science, mathematics, and biology. The project aims at fundamental advances in both simulation techniques and the understanding of the physicochemical principles that govern the functionality of the cell.
Max ERC Funding
1 397 328 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym SUBLINEAR
Project Sublinear algorithms for the analysis of very large graphs
Researcher (PI) Christian Sohler
Host Institution (HI) TECHNISCHE UNIVERSITAT DORTMUND
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Large graphs appear in many application areas. Typical examples are the webgraph, the internet graph, friendship graphs of social networks like Facebook or Google+, citation graphs, collaboration graphs, and transportation networks.
The structure of these graphs contains valuable information, but their size makes them very hard to analyze. We need special algorithms that analyze the graph structure via random sampling.
The main objective of the proposed project is to advance our understanding of the foundations of sampling processes for the analysis of the structure of large graphs. We will use the approach of Property Testing, a theoretical framework to analyze such sampling algorithms.
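A toy illustration of the query model behind such sampling algorithms (this naive estimator is for illustration only; provably accurate sublinear-time estimators, such as those studied in Property Testing, require more careful sampling schemes and come with rigorous guarantees):

```python
import random

def estimate_average_degree(degree_of, num_vertices, num_samples=1000, seed=0):
    """Estimate the average degree of a graph from degree queries alone.

    `degree_of(v)` is assumed to answer a degree query in O(1), so the whole
    estimate costs O(num_samples) queries, independent of the number of edges.
    This captures the flavor of sublinear-time graph analysis, not the
    project's actual algorithms.
    """
    rng = random.Random(seed)
    sampled = (degree_of(rng.randrange(num_vertices)) for _ in range(num_samples))
    return sum(sampled) / num_samples
```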
Max ERC Funding
1 475 306 €
Duration
Start date: 2012-12-01, End date: 2018-11-30
Project acronym UNIQUE
Project Non-equilibrium Information and Capacity Envelopes: Towards a Unified Information and Queueing Theory
Researcher (PI) Markus Fidler
Host Institution (HI) GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Dating back sixty years to the seminal works by Shannon, information theory is a cornerstone of communications. Among other things, its significance stems from the decoupling of data compression and transmission accomplished by the celebrated source and channel coding theorems. This success has, however, not been brought forward to communications networks. Yet particular advances, such as in cross-layer optimization and network coding, show the tremendous potential that may become accessible through a network information theory.
A major challenge for establishing a network information theory is posed by the properties of network data traffic, which is highly variable (sporadic) and delay-sensitive. In contrast, information theory mostly neglects the dynamics of information and capacity and focuses on averages and asymptotic limits. Typically, these limits can be achieved with a vanishingly small probability of error, assuming, however, arbitrarily long codewords (coding delays). Queueing theory, on the other hand, is employed to analyze network delays using (stochastic) models of a network's traffic arrivals and service. To date, a tight link between these models and the information-theoretic concepts of entropy and channel capacity is missing.
The goal of this project is to contribute elements of a network information theory that bridge the gap towards communications (queueing) networks. To this end, we use concepts from information theory to explore the dynamics of sources and channels. Our approach envisions envelope functions of information and capacity that have the ability to model the impact of the timescale, and that converge in the limit to the entropy and the channel capacity, respectively. The model will enable queueing theoretical investigations, permitting us to make significant contributions to the field of network information theory, and to provide substantial, new insights and applications from a holistic analysis of communications networks.
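One common way to write down such envelope functions in stochastic network calculus (a sketch of the generic form only; the project's envelopes would additionally depend on the timescale and, as stated above, converge to the entropy rate and the channel capacity in the limit):

```latex
% Statistical arrival envelope: the cumulative arrivals A(s,t) in the interval
% (s,t] exceed a rate-\rho, burst-\sigma bound only with small probability \varepsilon.
\mathbb{P}\bigl\{ A(s,t) > \rho\,(t-s) + \sigma \bigr\} \le \varepsilon
\quad \text{for all } 0 \le s \le t
```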
Max ERC Funding
1 316 408 €
Duration
Start date: 2012-12-01, End date: 2017-11-30