Project acronym 4TH-NU-AVENUE
Project Search for a fourth neutrino with a PBq anti-neutrino source
Researcher (PI) Thierry Michel René Lasserre
Host Institution (HI) COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Call Details Starting Grant (StG), PE2, ERC-2012-StG_20111012
Summary Several observed anomalies in neutrino oscillation data can be explained by a hypothetical fourth neutrino separated from the three standard neutrinos by a squared mass difference of a few eV². This hypothesis can be tested with a PBq (ten-kilocurie scale) 144Ce antineutrino beta-source deployed at the center of a large low-background liquid scintillator detector, such as Borexino, KamLAND, or SNO+. In particular, the compact size of such a source could yield an energy-dependent oscillating pattern in the event spatial distribution that would unambiguously determine neutrino mass differences and mixing angles.
The proposed program aims to perform the research and development necessary to produce and deploy an intense antineutrino source in a large liquid scintillator detector. Our program will address the definition of the source production process and its experimental characterization, the detailed physics simulation of both signal and backgrounds, the complete design and realization of the thick shielding, and the preparation of the interfaces with the antineutrino detector, including safety and security aspects.
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-10-01, End date: 2018-09-30
Project acronym ACTIVIA
Project Visual Recognition of Function and Intention
Researcher (PI) Ivan Laptev
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary Computer vision is concerned with the automated interpretation of images and video streams. Today's research is mostly aimed at answering queries such as "Is this a picture of a dog?" (classification) or sometimes "Find the dog in this photo" (detection). While categorisation and detection are useful for many tasks, inferring correct class labels is not the final answer to visual recognition. The categories and locations of objects do not provide a direct understanding of their function, i.e., how things work, what they can be used for, or how they can act and react. Such an understanding, however, would be highly desirable to answer currently unsolvable queries such as "Am I in danger?" or "What can happen in this scene?". Solving such queries is the aim of this proposal.
My goal is to uncover the functional properties of objects and the purpose of actions by addressing visual recognition from a different and as yet unexplored perspective. The main novelty of this proposal is to leverage observations of people, i.e., their actions and interactions, to automatically learn the use, the purpose and the function of objects and scenes from visual data. The project is timely as it builds upon two key recent technological advances: (a) the immense progress in visual recognition of objects, scenes and human actions achieved in the last ten years, and (b) the emergence of a massive amount of public image and video data now available to train visual models.
ACTIVIA addresses fundamental research issues in automated interpretation of dynamic visual scenes, but its results are expected to serve as a basis for ground-breaking technological advances in practical applications. The recognition of functional properties and intentions as explored in this project will directly support high-impact applications such as the detection of abnormal events, which are likely to revolutionise today's approaches to crime protection, hazard prevention, elderly care, and many others.
Max ERC Funding
1 497 420 €
Duration
Start date: 2013-01-01, End date: 2018-12-31
Project acronym ADAPT
Project Theory and Algorithms for Adaptive Particle Simulation
Researcher (PI) Stephane Redon
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2012-StG_20111012
Summary During the twentieth century, the development of macroscopic engineering was largely stimulated by progress in digital prototyping: cars, planes, boats, etc. are nowadays designed and tested on computers. Digital prototypes have progressively replaced physical ones, and effective computer-aided engineering tools have helped cut costs and reduce production cycles of these macroscopic systems.
The twenty-first century is likely to see a similar development at the atomic scale. Indeed, recent years have seen tremendous progress in nanotechnology, in particular in the ability to control matter at the atomic scale. As happened with macroscopic engineering, powerful and generic computational tools will be needed to engineer complex nanosystems through modeling and simulation. As a result, a major challenge is to develop efficient simulation methods and algorithms.
NANO-D, the INRIA research group I started in January 2008 in Grenoble, France, aims at developing efficient computational methods for modeling and simulating complex nanosystems, both natural and artificial. In particular, NANO-D develops SAMSON (Software for Adaptive Modeling and Simulation Of Nanosystems), a software application which gathers all algorithms designed by the group and its collaborators.
In this project, I propose to develop a unified theory, and associated algorithms, for adaptive particle simulation. The proposed theory will avoid problems that plague current popular multi-scale or hybrid simulation approaches by simulating a single potential throughout the system, while allowing users to finely trade precision for computational speed.
I believe the full development of the adaptive particle simulation theory will have an important impact on current modeling and simulation practices, and will enable practical design of complex nanosystems on desktop computers, which should significantly boost the emergence of generic nano-engineering.
Max ERC Funding
1 476 882 €
Duration
Start date: 2012-09-01, End date: 2017-08-31
Project acronym AdS-CFT-solvable
Project Origins of integrability in AdS/CFT correspondence
Researcher (PI) Vladimir Kazakov
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Advanced Grant (AdG), PE2, ERC-2012-ADG_20120216
Summary Fundamental interactions in nature are well described by quantum gauge fields in 4 space-time dimensions (4d). When the strength of the gauge interaction is weak, Feynman perturbation techniques are very efficient for describing most of the experimentally observable consequences of the Standard Model and for studying high-energy processes in QCD.
But in the intermediate and strong coupling regime, such as at the relatively small energies in QCD, perturbation theory fails, leaving us with no reliable analytic methods (except Monte Carlo simulation). The project aims at working out new analytic and computational methods for strongly coupled gauge theories in 4d. We will employ two important discoveries for this: 1) the gauge-string duality (AdS/CFT correspondence), relating certain strongly coupled gauge Conformal Field Theories to weakly coupled string theories on Anti-de Sitter space; 2) the solvability, or integrability, of maximally supersymmetric (N=4) 4d super Yang-Mills (SYM) theory in the multicolor limit. Integrability made possible pioneering exact numerical and analytic results in the N=4 multicolor SYM at any coupling, effectively summing up all 4d Feynman diagrams. Recently, we conjectured a system of functional equations - the AdS/CFT Y-system - for the exact spectrum of anomalous dimensions of all local operators in N=4 SYM. The conjecture has passed all available checks. My project is aimed at understanding the origins of this still mysterious integrability. Deriving the AdS/CFT Y-system from first principles on both sides of the gauge-string duality should provide a long-awaited proof of the AdS/CFT correspondence itself. I plan to use the Y-system to study systematic weak and strong coupling expansions and the so-called BFKL limit, as well as to calculate multi-point correlation functions of N=4 SYM. We hope for new insights into the strong coupling dynamics of less supersymmetric gauge theories and of QCD.
Max ERC Funding
1 456 140 €
Duration
Start date: 2013-11-01, End date: 2018-10-31
Project acronym ALCOHOLLIFECOURSE
Project Alcohol Consumption across the Life-course: Determinants and Consequences
Researcher (PI) Anne Rebecca Britton
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), LS7, ERC-2012-StG_20111109
Summary The epidemiology of alcohol use and related health consequences plays a vital role by monitoring populations' alcohol consumption patterns and problems associated with drinking. Such studies seek to explain mechanisms linking consumption to harm and ultimately to reduce the health burden. Research needs to consider changes in drinking behaviour over the life-course. The current evidence base lacks consideration of the complexity of lifetime consumption patterns, the predictors of change, and the subsequent health risks.
Aims of the study
1. To describe age-related trajectories of drinking in different settings and to determine the extent to which individual and social contextual factors, including socioeconomic position, social networks and life events influence drinking pattern trajectories.
2. To estimate the impact of drinking trajectories on physical functioning and disease and to disentangle the exposure-outcome associations in terms of a) timing, i.e. health effect of drinking patterns in early, mid and late life; and b) duration, i.e. whether the impact of drinking accumulates over time.
3. To test the bidirectional associations between health and changes in consumption over the life-course in order to estimate the relative importance of these effects and to determine the dominant temporal direction.
4. To explore mechanisms and pathways through which drinking trajectories affect health and functioning in later life and to examine the role played by potential effect modifiers of the association between drinking and poor health.
Several large, longitudinal cohort studies from European countries with repeated measures of alcohol consumption will be combined and analysed to address the aims. A new team will be formed consisting of the PI, a Research Associate and two PhD students. Dissemination will be through journals and conferences, culminating in a one-day workshop for academics, practitioners and policy makers in the alcohol field.
Max ERC Funding
1 032 815 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym ALGAME
Project Algorithms, Games, Mechanisms, and the Price of Anarchy
Researcher (PI) Elias Koutsoupias
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary The objective of this proposal is to bring together a local team of young researchers who will work closely with international collaborators to advance the state of the art of Algorithmic Game Theory and open new avenues of research at the interface of Computer Science, Game Theory, and Economics. The proposal consists mainly of three intertwined research strands: algorithmic mechanism design, price of anarchy, and online algorithms.
Specifically, we will attempt to resolve some outstanding open problems in algorithmic mechanism design: characterizing the incentive compatible mechanisms for important domains, such as the domain of combinatorial auctions, and resolving the approximation ratio of mechanisms for scheduling unrelated machines. More generally, we will study centralized and distributed algorithms whose inputs are controlled by selfish agents that are interested in the outcome of the computation. We will investigate new notions of mechanisms with strong truthfulness and limited susceptibility to externalities that can facilitate modular design of mechanisms of complex domains.
We will expand the current research on the price of anarchy to time-dependent games where the players can select not only how to act but also when to act. We also plan to resolve outstanding questions on the price of stability and to build a robust approach to these questions, similar to smoothed analysis. For repeated games, we will investigate convergence of simple strategies (e.g., fictitious play), online fairness, and strategic considerations (e.g., metagames). More generally, our aim is to find a productive formulation of playing unknown games by drawing on the fields of online algorithms and machine learning.
Max ERC Funding
2 461 000 €
Duration
Start date: 2013-04-01, End date: 2019-03-31
Project acronym ALLEGRO
Project Active large-scale learning for visual recognition
Researcher (PI) Cordelia Schmid
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary A massive and ever-growing amount of digital image and video content is available today, on sites such as Flickr and YouTube, in audiovisual archives such as those of the BBC and INA, and in personal collections. In most cases, it comes with additional information, such as text, audio or other metadata, that forms a rather sparse and noisy, yet rich and diverse source of annotation, ideally suited to emerging weakly supervised and active machine learning technology. The ALLEGRO project will take visual recognition to the next level by using this largely untapped source of data to automatically learn visual models. The main research objective of our project is the development of new algorithms and computer software capable of autonomously exploring evolving data collections, selecting the relevant information, and determining the visual models most appropriate for different object, scene, and activity categories. An emphasis will be put on learning visual models from video, a particularly rich source of information, and on the representation of human activities, one of today's most challenging problems in computer vision. Although this project addresses fundamental research issues, it is expected to result in significant advances in high-impact applications that range from visual mining of the Web and automated annotation and organization of family photo and video albums to large-scale information retrieval in television archives.
Max ERC Funding
2 493 322 €
Duration
Start date: 2013-04-01, End date: 2019-03-31
Project acronym ANTINEUTRINONOVA
Project Probing Fundamental Physics with Antineutrinos at the NOvA Experiment
Researcher (PI) Jeffrey Hartnell
Host Institution (HI) THE UNIVERSITY OF SUSSEX
Call Details Starting Grant (StG), PE2, ERC-2012-StG_20111012
Summary This proposal addresses major questions in particle physics that are at the forefront of experimental and theoretical physics research today. The results offered would have far-reaching implications in other fields such as cosmology and could help answer some of the big questions, such as why the universe contains so much more matter than antimatter. The research objectives of this proposal are to (i) make world-leading tests of CPT symmetry and (ii) discover the neutrino mass hierarchy and search for indications of leptonic CP violation.
The NOvA long-baseline neutrino oscillation experiment will use a novel "totally active scintillator design" for the detector technology and will be exposed to the world's highest-power neutrino beam. Building on the first direct observation of muon antineutrino disappearance (made by a group founded and led by the PI at the MINOS experiment), tests of CPT symmetry will be performed by looking for differences in the mass-squared splittings and mixing angles between neutrinos and antineutrinos. The potential to discover the mass hierarchy is unique to NOvA on the timescale of this proposal, due to the long 810 km baseline and the well-measured beam of neutrinos and antineutrinos.
This proposal addresses several key challenges in a long-baseline neutrino oscillation experiment with the following tasks: (i) development of a new approach to event energy reconstruction that is expected to have widespread applicability for future neutrino experiments; (ii) undertaking a comprehensive calibration project, exploiting a novel technique developed by the PI, that will be essential to achieving the physics goals; (iii) development of sophisticated statistical analyses.
The results promised in this proposal surpass the sensitivity to antineutrino oscillation parameters of current first-generation experiments by at least an order of magnitude, offering wide scope for profound discoveries with implications across disciplines.
Max ERC Funding
1 415 848 €
Duration
Start date: 2012-10-01, End date: 2018-09-30
Project acronym ARCHOFCON
Project The Architecture of Consciousness
Researcher (PI) Timothy John Bayne
Host Institution (HI) THE UNIVERSITY OF MANCHESTER
Call Details Starting Grant (StG), SH4, ERC-2012-StG_20111124
Summary The nature of consciousness is one of the great unsolved mysteries of science. Although the global research effort dedicated to explaining how consciousness arises from neural and cognitive activity is now more than two decades old, as yet there is no widely accepted theory of consciousness. One reason why no adequate theory of consciousness has yet been found is that there is a lack of clarity about what exactly a theory of consciousness needs to explain. What is needed is thus a model of the general features of consciousness, a model of the 'architecture' of consciousness, that will systematize the structural differences between conscious states, processes and creatures on the one hand and unconscious states, processes and creatures on the other. The aim of this project is to remove one of the central impediments to the progress of the science of consciousness by constructing such a model.
A great many of the data required for this task already exist, but these data concern different aspects of consciousness and are distributed across many disciplines. As a result, there have been few attempts to develop a truly comprehensive model of the architecture of consciousness. This project will overcome the limitations of previous work by drawing on research in philosophy, psychology, psychiatry, and cognitive neuroscience to develop a model of the architecture of consciousness that is structured around five of its core features: its subjectivity, its temporality, its unity, its selectivity, and its dimensionality (that is, the relationship between the levels of consciousness and the contents of consciousness). By providing a comprehensive characterization of what a theory of consciousness needs to explain, this project will provide a crucial piece of the puzzle of consciousness, enabling future generations of researchers to bridge the gap between raw data on the one hand and a full-blown theory of consciousness on the other.
Max ERC Funding
1 477 483 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym BIMPC
Project Biologically-Inspired Massively-Parallel Computation
Researcher (PI) Stephen Byram Furber
Host Institution (HI) THE UNIVERSITY OF MANCHESTER
Call Details Advanced Grant (AdG), PE6, ERC-2012-ADG_20120216
Summary "We aim to establish a world-leading research capability in Europe for advancing novel models of asynchronous computation based upon principles inspired by brain function. This work will accelerate progress towards an understanding of how the potential of brain-inspired many-core architectures may be harnessed. The results will include new brain-inspired models of asynchronous computation and new brain-inspired approaches to fault-tolerance and reliability in complex computer systems.
Many-core processors are now established as the way forward for computing from embedded systems to supercomputers. An emerging problem with leading-edge silicon technology is a reduction in the yield and reliability of modern processors due to high variability in the manufacture of the components and interconnect as transistor geometries shrink towards atomic scales. We are faced with the longstanding problem of how to make use of a potentially large array of parallel processors, but with the new constraint that the individual elements of the system are inherently unreliable.
The human brain remains one of the great frontiers of science – how does this organ upon which we all depend so critically actually do its job? A great deal is known about the underlying technology – the neuron – and we can observe large-scale brain activity through techniques such as magnetic resonance imaging, but this knowledge barely starts to tell us how the brain works. Something is happening at the intermediate levels of processing that we have yet to begin to understand, but the essence of the brain's massively-parallel information processing capabilities and robustness to component failure lies in these intermediate levels.
These two issues draw us towards two high-level research questions:
• Can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computing?
• Can massively parallel computing resources accelerate our understanding of brain function?"
Max ERC Funding
2 399 761 €
Duration
Start date: 2013-03-01, End date: 2018-02-28