Project acronym Becoming Social
Project Social Interaction Perception and the Social Brain Across Typical and Atypical Development
Researcher (PI) Kami KOLDEWYN
Host Institution (HI) BANGOR UNIVERSITY
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary Social interactions are multifaceted and subtle, yet we can almost instantaneously discern if two people are cooperating or competing, flirting or fighting, or helping or hindering each other. Surprisingly, the development and brain basis of this remarkable ability have remained largely unexplored. At the same time, understanding how we develop the ability to process and use social information from other people is widely recognized as a core challenge facing developmental cognitive neuroscience. The Becoming Social project meets this challenge by proposing the most complete investigation to date of the development of the behavioural and neurobiological systems that support complex social perception. To achieve this, we first systematically map how the social interactions we observe are coded in the brain by testing typical adults. Next, we investigate developmental change both behaviourally and neurally during a key stage in social development in typically developing children. Finally, we explore whether social interaction perception is clinically relevant by investigating it developmentally in autism spectrum disorder. The Becoming Social project is expected to lead to a novel conception of the neurocognitive architecture supporting the perception of social interactions. In addition, neuroimaging and behavioural tasks measured longitudinally during development will allow us to determine how individual differences in brain and behaviour are causally related to real-world social ability and social learning. The planned studies, as well as those generated during the project, will enable the Becoming Social team to become a world-leading group bridging social cognition, neuroscience and developmental psychology.
Max ERC Funding
1 500 000 €
Duration
Start date: 2017-04-01, End date: 2022-03-31
Project acronym DEVBRAINTRAIN
Project Neurocognitive mechanisms of inhibitory control training and transfer effects in children
Researcher (PI) Nikolaus STEINBEIS
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary Inhibitory control refers to the ability to control behavioural impulses and is critical for cognitive development. It has traditionally been thought of as a stable trait across the lifespan, but recent insights from cognitive neuroscience show prolonged changes in brain regions that support inhibitory control, indicating greater malleability than previously believed. Because childhood inhibitory control predicts well-being later in life, this suggests exciting opportunities for enhancing inhibitory control. I build on highly promising pilot results and draw on a recent neurocognitive model of inhibitory control to test 1) if inhibitory control can be enhanced during childhood, 2) if this transfers onto other domains important for healthy psychological development, such as prosocial and patient decision-making and academic achievement, and 3) which factors predict training success. Children aged 5 to 10 years will undergo 8 weeks of inhibitory control training, which is a critical duration for observing prolonged training effects, and will be compared to a group undergoing active sham-training of comparable stimuli and duration but without inhibition. I will assess training effects on the brain and look at transfer effects onto other domains such as other executive functions, prosocial and patient decision-making and academic achievement, both immediately and 1 year after training. I expect training to 1) improve inhibitory control, 2) transfer onto performance in the above-mentioned domains and 3) elicit neural changes indicating the effectiveness of training for reactive and proactive control. I also expect that individual differences in inhibitory control ability and associated brain regions prior to training will predict training success. The proposed research has the potential to generate a new and ground-breaking framework on the early malleability of inhibitory control, with implications for interventions at the time point of greatest likely impact.
Max ERC Funding
1 500 000 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym ECOLANG
Project Ecological Language: A multimodal approach to language and the brain
Researcher (PI) Gabriella VIGLIOCCO
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Advanced Grant (AdG), SH4, ERC-2016-ADG
Summary The human brain has evolved the ability to support communication in complex and dynamic environments. In such environments, language is learned and mostly used in face-to-face contexts in which processing and learning are based on multiple cues: linguistic (such as lexical, syntactic), but also discourse, prosody, face and hands (gestures). Yet, our understanding of how language is learnt and processed, and its associated neural circuitry, comes almost exclusively from reductionist approaches in which the multimodal signal is reduced to speech or text. ECOLANG will pioneer a new way to study language comprehension and learning using a real-world approach in which language is analysed in its rich face-to-face multimodal environment (i.e., language’s ecological niche). Experimental rigour is preserved through the use of innovative technologies (combining automatic, manual and crowdsourcing methods for annotation; creating avatar stimuli for our experiments) and state-of-the-art modelling and data analysis (probabilistic modelling and network-based analyses). ECOLANG studies how the different cues available in face-to-face communication dynamically contribute to processing and learning in adults, children and aphasic patients in contexts representative of everyday conversation. We collect and annotate a corpus of naturalistic language which is then used to derive quantitative informativeness measures for each cue and their combination using computational models, tested and refined on the basis of behavioural and neuroscientific data.
We use converging methodologies (behavioural, EEG, fMRI and lesion-symptom mapping) and we investigate different populations (3- to 4-year-old children, healthy and aphasic adults) in order to develop mechanistic accounts of multimodal communication at the cognitive as well as the neural level that can explain processing and learning (by both children and adults) and can have an impact on the rehabilitation of language functions after stroke.
Max ERC Funding
2 243 584 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym EmbodiedTech
Project Can humans embody augmentative robotics technology?
Researcher (PI) Tamar Rebecca MAKIN
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary Wearable technology is redefining the boundaries of our own body. Wearable robotic (WR) fingers and arms are robots, designed to free up or complement our hand actions, to enhance humans’ abilities. While tremendous resources are being dedicated to the development of this groundbreaking technology, little notice is given to how the human brain might support it. The intuitive, though unfounded, view is that technology will fuse with our bodies, allowing our brains to seamlessly control it (i.e. embodied technology). This implies that our brain will share resources, originally devoted to controlling our body, to operate WRs. Here I will elucidate the conditions necessary for technological embodiment, using prosthetic limbs as a model. I will build upon knowledge gained from rehabilitation, experimental psychology and neuroscience to characterise and extend the boundaries of body representation towards successful adoption of WRs. I will combine behavioural, physiological and neuroimaging tools to address five key questions that are currently obscuring the vision of embodied technology: What conditions are necessary for a person to experience an artificial limb as part of their body? Would the resources recruited to control an artificial limb be shared, or rather conflict, with human body representation? Will the successful incorporation of WRs disorganise representations of the human limbs? Can new sensory experiences (touch) be intuitively inferred from WRs? Can the adult brain support the increased motor and cognitive demands associated with successful WR usage? I will first focus on populations with congenital and acquired hand loss, who differ in brain resources due to plasticity, but experience similar daily-life challenges. I will then test body representation in able-bodied people while they learn to use WR fingers and arms.
Together, my research will provide the first foundation for guiding how to successfully incorporate technology into our body representation.
Max ERC Funding
1 499 406 €
Duration
Start date: 2017-02-01, End date: 2022-01-31
Project acronym FREEMIND
Project FREE the MIND: the neurocognitive determinants of intentional decision
Researcher (PI) Jiaxiang ZHANG
Host Institution (HI) CARDIFF UNIVERSITY
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary Acting based on intention is an ability fundamental to our lives. Apple or orange, cash or card: we constantly make intentional decisions to fulfil our desires, even when the options have no explicit difference in their rewards. Recently, I and others have offered the first evidence that intentional decisions and externally guided decisions share similar computational principles. However, how the brain implements these principles for intentional decision remains unknown.
This project aims to establish a multilevel understanding of intentional decision, spanning from neurons to brain networks to behaviour, through a powerful combination of novel paradigms, cutting-edge brain imaging, and innovative methods. Central to my approach is formal computational modelling, allowing me to establish a quantitative link between data and theory at multiple levels of abstraction. Subproject 1 will ask which brain regions encode intentional information, when intentional processes occur, and how neurochemical concentration influences intentional decision. Subproject 2 will focus on theoretically predicted changes in intentional decision under behavioural and neural interventions. I will use brain imaging and brain stimulation to test the flexibility of intentional decision within individuals. Subproject 3 will launch the largest study to date on intentional decision. I will characterize individual differences in intentional decision in a representative sample of 2,000 participants. I will then investigate, with high statistical power, the contributions of neurochemistry and brain microstructure to individual differences in intentional decision. This project promises to establish the first neurobiological theory of intentional behaviour, and provide a mechanistic understanding of its changes within and between individuals. The new theory and innovative methodology will open further research possibilities to explore intentional deficits in diseases, and the neural basis of human volition.
Max ERC Funding
1 487 908 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym INtheSELF
Project Inside the Self: from interoception to self- and other-awareness
Researcher (PI) EMMANOUIL TSAKIRIS
Host Institution (HI) ROYAL HOLLOWAY AND BEDFORD NEW COLLEGE
Call Details Consolidator Grant (CoG), SH4, ERC-2016-COG
Summary Modern psychology has long focused on the importance of the body as the basis of the self. However, this focus concerned the exteroceptive body, that is, the body as perceived from the outside, as when we recognize ourselves in the mirror. This influential approach has neglected another important dimension of the body, namely the interoceptive body, that is, the body as perceived from within, as for example when one feels her racing heart. In psychology, research on interoception has focused mainly on its role in emotion. INtheSELF, however, goes beyond this approach, aiming instead to show how interoception and interoceptive awareness serve the unity and stability of the self, analogous to the role of interoception in maintaining physiological homeostasis. To test this hypothesis we go beyond the division between interoception and exteroception to consider their integration in self-awareness. INtheSELF will develop novel, pioneering methods for the study of causal relationships between interoceptive and exteroceptive awareness (WP1), allowing us to test how these two sources of information about the self interact to reflect the balance between stability and adaptation (WP2); how their inter-relation is built in parallel to the development of self-awareness in early childhood and adolescence (WP3); and the role that their interaction has for social relatedness (WP4). INtheSELF provides an alternative to existing psychological theories of the self insofar as it goes beyond the apparent antagonism between the awareness of the self from the outside and from within, to consider their dynamic integration. In doing so, INtheSELF aims to elucidate for the first time how humans navigate the challenging balance between inside and out, in terms of both the individual’s natural (interoception vs. exteroception) and social (self vs. others) embodiment in the world.
Max ERC Funding
1 998 394 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym JOINTATT
Project The evolutionary and developmental origins of Joint Attention: a longitudinal cross-species and cross-cultural comparison
Researcher (PI) Kathryn Elizabeth SLOCOMBE
Host Institution (HI) UNIVERSITY OF YORK
Call Details Consolidator Grant (CoG), SH4, ERC-2016-COG
Summary Humans frequently coordinate and share attention about objects and events. Our basic ability to engage in joint attention (JA) is thought to underpin our uniquely complex cooperation skills and language, raising the possibility that the emergence of JA was a ‘small change that made a big difference’ in the evolution of human cognition. Despite the theoretical importance of JA for understanding human social cognition, we know surprisingly little about JA across species and cultures. Methodological shortcomings limit our understanding of the extent to which JA is uniquely human or shared with our primate cousins, and we lack data on how this ability develops in non-western cultures, which aspects of the social environment are necessary for JA to emerge and how JA is related to the emergence of cooperation. The JOINTATT project will address these four key issues by collecting longitudinal data on mother-infant dyads over the first 2 years of the infant’s life, across four different study groups: Ugandan and British humans; wild chimpanzees and crested macaque monkeys. The project will develop novel tasks and measures that allow the same set of data to be collected in directly comparable ways across species and provide the first valid, rigorous test of whether engagement in JA is a uniquely human trait. Data from the two human groups will test how different elements of JA are related and whether JA develops in a uniform way across cultures. Longitudinal data on mother-infant interactions and the infant’s environment will be related to performance on JA tasks across all four groups, enabling us to identify conditions that are likely necessary for JA to emerge. Performance on JA and cooperative tasks will be compared to assess whether engagement in JA predicts the later emergence of cooperation. This project will provide ground-breaking insights into JA and its evolutionary origins, and is likely to challenge current theories of how human social cognition evolved.
Max ERC Funding
1 989 611 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym M and M
Project Generalization in Mind and Machine
Researcher (PI) jeffrey BOWERS
Host Institution (HI) UNIVERSITY OF BRISTOL
Call Details Advanced Grant (AdG), SH4, ERC-2016-ADG
Summary Is the human mind a symbolic computational device? This issue was at the core of Chomsky’s critique of Skinner in the 1960s, and motivated the debates regarding Parallel Distributed Processing models developed in the 1980s. The recent successes of “deep” networks make this issue topical for psychology and neuroscience, and it raises the question of whether symbols are needed for artificial intelligence more generally.
One of the innovations of the current project is to identify simple empirical phenomena that will serve as a critical test-bed for both symbolic and non-symbolic neural networks. In order to make substantial progress on this issue, a series of empirical and computational investigations is organised as follows. First, studies focus on tasks that, according to proponents of symbolic systems, require symbols for the sake of generalisation. Accordingly, if non-symbolic networks succeed, it would undermine one of the main motivations for symbolic systems. Second, studies focus on generalisation in tasks in which human performance is well characterised. Accordingly, the research will provide important constraints for theories of cognition across a range of domains, including vision, memory, and reasoning. Third, studies develop new learning algorithms designed to make symbolic systems biologically plausible. One of the reasons why symbolic networks are often dismissed is the claim that they are not as biologically plausible as non-symbolic models. This last ambition is the most high-risk but also potentially the most important: Introducing new computational principles may fundamentally advance our understanding of how the brain learns and computes, and furthermore, these principles may increase the computational powers of networks in ways that are important for engineering and artificial intelligence.
Max ERC Funding
2 495 578 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym MECHIDENT
Project Who is that? Neural networks and mechanisms for identifying individuals
Researcher (PI) CHRISTOPHER ILIEV PETKOV
Host Institution (HI) UNIVERSITY OF NEWCASTLE UPON TYNE
Call Details Consolidator Grant (CoG), SH4, ERC-2016-COG
Summary Our social interactions and survival critically depend on identifying specific individuals to interact with or avoid (“who is that?”). Individuals can be identified via different sensory inputs, and by many accounts any sensory input elicits a representation of an individual that somehow becomes transmodal, independent of any single sensory system. However, how the brain achieves the transmodal integration that supports individual recognition remains a mystery: investigations in humans allowing direct access to site-specific neuronal processes are rare and have not focused on neuronal multisensory integration for person recognition, and animal models for studying the neuronal mechanisms of related processes have only recently become available. I propose to use direct recordings of neuronal activity in both humans and monkeys during face- and voice-identification tasks, combined with site-specific manipulation of the sensory input streams into the lateral anterior temporal lobe (ATL). The ATL brings together identity-specific content from the senses, but the neuronal mechanisms of this convergence are entirely unknown. My core hypothesis is that auditory voice- or visual face-identity input into key ATL convergence sites elicits a sensory-modality-invariant representation which, once elicited, is robust to degradation or inactivation of neuronal input from the other sense. The central aim is to test this in human patients being monitored for surgery and to directly compare and link the results with those in monkeys, where the neuronal circuit and mechanisms can be revealed using optogenetic control of neuronal responses. Analyses will assess neuronal dynamics and sensory integration frameworks. This proposal is poised to unravel how the brain combines the multisensory input critical for identifying individuals and for the cognitive operations that act on it.
The basic science insights gained may inform efforts to stratify patients with different types of ATL damage.
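The convergence hypothesis above can be caricatured in a few lines. The function below is a purely illustrative assumption (not the project's model): an "identity unit" pools face and voice evidence for one individual, so its response survives inactivation of either single input stream.

```python
# Toy sketch of a modality-invariant convergence unit; hypothetical, for
# illustration only. Evidence values are in [0, 1] per sensory stream.

def identity_unit(face_evidence, voice_evidence, threshold=0.5):
    """Fire if pooled evidence from either modality exceeds threshold."""
    pooled = max(face_evidence, voice_evidence)  # modality-invariant pooling
    return pooled > threshold


# Intact inputs: both modalities signal the same individual.
recognised_intact = identity_unit(0.9, 0.8)
# 'Inactivating' the voice stream leaves recognition intact via the face.
recognised_face_only = identity_unit(0.9, 0.0)
```

The proposed experiments test whether ATL responses behave this way when one input stream is degraded or optogenetically silenced.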
Max ERC Funding
1 995 677 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym NEUROABSTRACTION
Project Abstraction and Generalisation in Human Decision-Making
Researcher (PI) Christopher SUMMERFIELD
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Consolidator Grant (CoG), SH4, ERC-2016-COG
Summary Intelligent agents make good decisions in novel environments. Understanding how humans deal with novelty is a key problem in the cognitive and neural sciences, and building artificial agents that behave effectively in novel settings remains an unsolved challenge in machine learning. According to one view, humans form abstract representations that encode latent variables pertaining to the high-level structure of the environment (a “model” of the world). These abstractions facilitate generalisation of extant task and category information to novel domains. For example, an individual who can ride a bicycle, or speak Spanish, will learn more rapidly to ride a motorcycle, or to speak Portuguese. However, the neural basis of these abstractions, and the computational underpinnings of high-level generalisation, remain largely unexplored topics in cognitive neuroscience. In the current proposal, we describe four experimental series in which humans learn to perform structured decision-making tasks and then generalise this behaviour to input domains populated by previously unseen stimuli, categories, or tasks. Building on extant pilot work, we will use representational similarity analysis (RSA) of neuroimaging (fMRI or EEG) data to chart the emergence of neural representations encoding abstract structure in patterns of brain activity. We will then assess how the formation of these abstractions at the neural level predicts successful human generalisation to previously unseen contexts. Our proposal is centred on a new theory: that task generalisation depends on the formation of low-dimensional population codes in the human dorsal stream, scaffolded by existing neural basis functions for space, value and number. The work will have important implications for psychologists and neuroscientists interested in decision-making and executive function, and for machine learning researchers seeking to build intelligent artificial agents.
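The core RSA computation mentioned above can be sketched compactly. The function names below are illustrative assumptions, not the project's pipeline: for each experimental condition one takes a response pattern (across voxels or channels), builds a representational dissimilarity matrix (RDM) of 1 − correlation for every pair of conditions, and correlates the neural RDM's upper triangle with that of a model RDM encoding the hypothesised abstract structure.

```python
# Minimal, pure-Python sketch of representational similarity analysis (RSA);
# function names are hypothetical, for illustration only.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def rdm(patterns):
    """Condition-by-condition dissimilarity matrix (1 - Pearson r)."""
    k = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(k)]
            for i in range(k)]

def upper_triangle(m):
    """Off-diagonal upper-triangle entries, the unique pairwise values."""
    k = len(m)
    return [m[i][j] for i in range(k) for j in range(i + 1, k)]

def rsa_score(neural_patterns, model_rdm):
    """Correlate the neural RDM with a model RDM over their upper triangles."""
    return pearson(upper_triangle(rdm(neural_patterns)),
                   upper_triangle(model_rdm))
```

In practice rank-based (Spearman) comparison of RDMs is common and library implementations exist; this sketch only shows the logic of comparing neural and model geometry.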
Max ERC Funding
1 999 775 €
Duration
Start date: 2017-07-01, End date: 2022-06-30