Project acronym EVOTONE
Project The emergence and evolution of linguistic tone
Researcher (PI) James KIRBY
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Starting Grant (StG), SH4, ERC-2017-STG
Summary This project will investigate the origins, acquisition, and evolution of linguistic tone: the use of pitch to distinguish between the meaning of words. Despite the typological ubiquity of tone, there is still no phonetic, structural, or psychological model that explains how and why tones emerge (or fail to emerge) in language after language, nor how they evolve once they are formed. This is because there has never been a systematic analysis of the principles that govern the evolution of tone systems. EVOTONE will provide the first comprehensive study of tonal emergence and evolution, combining detailed phonetic and perceptual studies of Himalayan and Southeast Asian minority languages with innovative experimental methodologies and large-scale computational analysis of the structural principles correlated with the emergence of tone.
EVOTONE is guided by a novel hypothesis that, if correct, will have important repercussions for the study of sound change. The core idea is deceptively simple: rather than being the result of small, incremental changes in pronunciation, features like tone come about due to a sudden failure to articulate a particular aspect of a sound. If the risk of focusing on tone is to overemphasize a single feature, the potential reward is enormous: an opportunity to transform our understanding of how physical and cognitive pressures interact to shape human behavior and language change. The outcomes of this project will provide a new empirical foundation for the typology and evolution of tone systems; break new ground in the study of how structural and phonetic factors interact in sound change; and establish, for the first time, an empirically grounded set of principles of tonal evolution. In addition to resolving a number of outstanding questions about tonogenesis, the results will substantially advance our more general understanding of how language changes over time.
Max ERC Funding
1 481 154 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym EXPC
Project Challenging and extending predictive coding as an account of brain function
Researcher (PI) Thomas FITZGERALD
Host Institution (HI) UNIVERSITY OF EAST ANGLIA
Call Details Starting Grant (StG), SH4, ERC-2018-STG
Summary Probabilistic models of brain function, which propose that the brain can be understood as implementing the principles of optimal statistical inference, have become extremely influential in recent years. Predictive coding is perhaps the most widely held, and best supported, of these models, particularly within cognitive neuroscience. However, current models of predictive coding and its neuronal substrates are still relatively simple, and do not explain how humans solve a number of fundamental problems. This limits their power to explain brain function. I propose a series of experiments designed to test how human subjects address a number of these core problems. I will use behavioural and neuroimaging data to develop and test extensions to current models (or, if necessary, provide an alternative framework for understanding brain function). The purpose of this is two-fold, to give insight into the computations that underlie cognitive function, and to provide understanding of the neurobiological processes that support those computations. The project will thus constitute a stepping stone towards developing a mechanistic model of how the brain implements cognition.
Max ERC Funding
1 464 713 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym FACEVAR
Project Face Recognition: Understanding the role of within-person variability
Researcher (PI) Anthony Michael Burton
Host Institution (HI) UNIVERSITY OF YORK
Call Details Advanced Grant (AdG), SH4, ERC-2012-ADG_20120411
Summary This project represents a new way to look at the problem of human face recognition. Despite a large amount of research on this topic, we still do not understand the most fundamental aspect of face processing: how can we identify the people we see? This is a key problem in human perception, but it also has practical implications in forensic and security settings. This project has its roots in a simple observation: pictures of the same face can look very different indeed. In the standard approach to face recognition, this commonplace fact is treated as an inconvenience. Differences between pictures of the same person are regarded as ‘noise’, and either ignored, or eliminated by systematically controlling the images used for research. This research programme takes exactly the converse approach. Instead of trying to control away this variability, we wish to study it explicitly. Under this approach, the focus is not how to ‘tell people apart’, but instead how to ‘tell people together’ – how to bring together superficially different images into a coherent representation. Early work suggests that a very important component of familiar face recognition is the ability to generalize over superficial image differences – differences which tend to fool unfamiliar viewers, as well as automatic computer-based systems. The current failure to address this variability may account for the slow progress in face identification – progress which has fallen behind the understanding of other aspects of face processing such as social perception. By studying this missing component of face recognition, a novel theoretical model will be developed which has the potential to make a significant contribution.
Max ERC Funding
1 496 263 €
Duration
Start date: 2013-06-01, End date: 2017-05-31
Project acronym FEEL
Project A new approach to understanding consciousness: how "feel" arises in humans and (possibly) robots.
Researcher (PI) John Kevin O'Regan
Host Institution (HI) UNIVERSITE PARIS DESCARTES
Call Details Advanced Grant (AdG), SH4, ERC-2012-ADG_20120411
Summary Philosophers divide the problem of consciousness into two parts: An “easy” part, which involves explaining how one can become aware of something in the sense of being able to make use of it in one's rational behavior. And a “hard” part, which involves explaining why certain types of brain activity should actually give rise to feels: for example the feel of "red" or of "onion flavor". The "hard" part is considered hard because there seems logically no way physical mechanisms in the brain could generate such experiences.
The sensorimotor theory (O’Regan, 2011) has an answer to the "hard" problem. The idea is that feel is a way of interacting with the environment. The laws describing such interactions, called sensorimotor contingencies, determine the quality of how a feel is experienced. For example, they determine whether someone experiences a feel as being real or imagined, as being visual or tactile, and how a feel compares to other feels. The sensorimotor theory provides a unifying framework for an understanding of consciousness, but it needs a firmer conceptual and mathematical basis and additional scientific testing.
To do this, a first, theoretical goal of the FEEL project is to provide a mathematical basis for the concept of sensorimotor contingency, and to clarify and consolidate its conceptual foundations.
A second goal is to empirically test scientific implications of the theory in specific, promising areas: namely, color psychophysics, sensory substitution, child development and developmental robotics.
The expected outcome is a fully-fledged theory of feel, from elementary feels like "red" to more abstract feels like the feel of sensory modalities, the notions of body and object. Applications are anticipated in color science, the design of sensory prostheses, improving the "presence" of virtual reality and gaming, and in understanding how infants and possibly robots come to have sensory experiences.
Max ERC Funding
2 498 340 €
Duration
Start date: 2013-06-01, End date: 2018-05-31
Project acronym FLEXSEM
Project Graded constraints in semantic cognition: How do we retrieve knowledge in a flexible way?
Researcher (PI) Elizabeth Alice JEFFERIES
Host Institution (HI) UNIVERSITY OF YORK
Call Details Consolidator Grant (CoG), SH4, ERC-2017-COG
Summary For any concept, we have knowledge about diverse features – for example, a dog is furry, can chase rabbits, and is “man’s best friend”. How, at a specific moment, do we flexibly retrieve relevant conceptual knowledge that suits our current goals and context? We can promote coherence between weakly-related aspects of knowledge as required, and also achieve the timely release from patterns of retrieval when the situation changes. These effects are likely to play a central role in our mental lives – yet they are poorly understood because past research has largely focused on how the conceptual store captures what is generally true across experiences (i.e. semantic representation). This project alternatively examines the cognitive and brain mechanisms that promote currently-relevant semantic information. We consider whether flexible semantic retrieval involves the recruitment of additional brain regions, organised within large-scale distributed networks, that place constraints on patterns of retrieval in the semantic store. In this way, semantic flexibility might relate to the evolving interaction between distinct brain networks. We examine whether specific brain regions support distinct cognitive processes (e.g., “automatic retrieval”; “selection”) or, alternatively, whether the functional organisation of these networks is non-arbitrary, with brain regions further away from the semantic store supporting retrieval when there is a greater mismatch between ongoing retrieval and the pattern required by the context. We test this “graded constraints” hypothesis by combining parametric manipulations of the need for constraint with convergent neuroscientific methods that characterise functional recruitment in space (magnetic resonance imaging) and time (magnetoencephalography). We investigate causality (neuropsychology; brain stimulation) and the broader implications of our account (using an individual differences approach).
Max ERC Funding
1 999 860 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym FraMEPhys
Project A Framework for Metaphysical Explanation in Physics
Researcher (PI) Alastair WILSON
Host Institution (HI) THE UNIVERSITY OF BIRMINGHAM
Call Details Starting Grant (StG), SH4, ERC-2017-STG
Summary There is a growing consensus that causal explanation is not the whole story about explanation in science. Metaphysics has seen intense recent attention to the notion of grounding; in philosophy of physics, the focus has been on mathematical and structural explanation. But the grounding debate has been criticized for insularity and disconnection from scientific practice, while work on explanation in physics tends to overlook the sophisticated logical systems and conceptual distinctions developed in metaphysics. This situation hinders understanding of novel explanatory scenarios in philosophy of physics, where familiar models of causal explanation seem to break down. FraMEPhys addresses these challenges by combining new conceptual innovations and insights from both metaphysics and philosophy of physics to transform our understanding of the nature of explanation.
FraMEPhys will engage systematically with the best work on explanation within metaphysics and philosophy of science to develop a new general framework for understanding metaphysical explanation in physics, based around the structural-equations approach to causation. The guiding idea is that the conceptual and methodological tools of structural-equations modelling can be extended beyond their familiar application to causal explanation. This promising strategy, based on ground-breaking recent work by the PI, will be applied in FraMEPhys to model the explanatory structures involved in three case studies from philosophy of physics: geometrical explanations of inertial and gravitational motion, explanation in the presence of closed time-like curves, and the explanatory connection between entangled quantum systems. FraMEPhys will develop new concepts for understanding the varieties of explanation, will provide a uniquely systematic treatment of some key cases in philosophy of physics, and will push forward fruitful interactions at the intersection of metaphysics, philosophy of science and philosophy of physics.
Max ERC Funding
1 481 184 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym FREEMIND
Project FREE the MIND: the neurocognitive determinants of intentional decision
Researcher (PI) Jiaxiang ZHANG
Host Institution (HI) CARDIFF UNIVERSITY
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary Acting on intention is a fundamental ability in our lives. Apple or orange, cash or card: we constantly make intentional decisions to fulfil our desires, even when the options have no explicit difference in their rewards. Recently, I and others have offered the first evidence that intentional decision and externally guided decision share similar computational principles. However, how the brain implements these principles for intentional decision remains unknown.
This project aims to establish a multilevel understanding of intentional decision, spanning from neurons to brain networks to behaviour, through a powerful combination of novel paradigms, cutting-edge brain imaging, and innovative methods. Central to my approach is formal computational modelling, allowing me to establish a quantitative link between data and theory at multiple levels of abstraction. Subproject 1 will ask which brain regions encode intentional information, when intentional processes occur, and how neurochemical concentration influences intentional decision. Subproject 2 will focus on theoretically predicted changes in intentional decision under behavioural and neural interventions. I will use brain imaging and brain stimulation to test the flexibility of intentional decision within individuals. Subproject 3 will launch the largest study to date on intentional decision. I will characterize individual differences in intentional decision in a representative sample of 2,000 participants. I will then investigate, with high statistical power, the contributions of neurochemistry and brain microstructure to individual differences in intentional decision. This project promises to establish the first neurobiological theory of intentional behaviour, and provide mechanistic understanding of its changes within and between individuals. The new theory and innovative methodology will open further research possibilities to explore intentional deficits in diseases, and the neural basis of human volition.
Max ERC Funding
1 487 908 €
Duration
Start date: 2017-03-01, End date: 2022-02-28
Project acronym FRONTSEM
Project New Frontiers of Formal Semantics
Researcher (PI) Philippe David Schlenker
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Advanced Grant (AdG), SH4, ERC-2012-ADG_20120411
Summary Despite considerable successes in the last 40 years, formal semantics has not quite established itself as a field of great relevance to the broader enterprise of cognitive and social science. Besides the unavoidable technicality of formal semantic theories, there might be two substantive reasons. First, the lingua franca of cognitive science is the issue of the modular decomposition of the mind – but formal semantics has partly moved away from it: the sophisticated logical models of meaning in current use typically lump together all aspects of meaning in a big 'semantics-cum-pragmatics'. Second, formal semantics has remained somewhat parochial: it almost never crosses the frontiers of spoken language - despite the fact that questions of obvious interest arise in sign language; and it rarely addresses the relation between linguistic meaning and other cognitive systems, be it in humans or in related species. While strictly adhering to the formal methodology of contemporary semantics, we will seek to expand the frontiers of the field, with one leading question: what is the modular organization of meaning?
(i) First, we will help establish a new subfield of sign language formal semantics, with an initial focus on anaphora; we will ask whether the interaction between an abstract anaphoric module and the special geometric properties of sign language can account for the similarities and differences between sign and spoken language pronouns.
(ii) Second, we will revisit issues of modular decomposition between semantics and pragmatics by trying to disentangle modules that have been lumped together in recent semantic theorizing, in particular in the domains of presupposition, anaphora and conventional implicatures.
(iii) Third, we will ask whether some semantic modules might have analogues in other cognitive systems by investigating (a) possible precursors of semantics in primate vocalizations, and (b) possible applications of focus in music.
Max ERC Funding
2 490 488 €
Duration
Start date: 2013-05-01, End date: 2018-04-30
Project acronym GESTIMAGE
Project Gestures in nonhuman and human primates, a landmark of language in the brain? Searching for the origins of brain specialization for language
Researcher (PI) Adrien Ludwig Ohannes MEGUERDITCHIAN
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary Most language functions are under left-hemisphere control in both left- and right-handers and involve structural asymmetries between the two hemispheres. While this asymmetry was long considered to be associated with handedness, that relation has recently been questioned. Given the strong language/gesture links in humans and the continuities between the gestural system of apes and monkeys and some properties of language, we recently proposed the hypothesis of a continuity between language lateralization and the asymmetry of communicative gestures in both human and nonhuman primates. Given the phylogenetic proximity of these species, comparative research on brain specialization for a non-linguistic gestural system (i.e., in monkeys) versus a linguistic gestural system in humans (i.e., sign language in deaf signers) can help evaluate the gestural continuities with language lateralization in terms of manual asymmetries and the structural and functional lateralization of the brain.
To this end, a first objective is to evaluate the continuities of manual and brain asymmetries between (1) a linguistic gestural system in humans, using MRI in 100 adult native deaf French signers, and (2) a non-linguistic gestural system in adult baboons (Papio anubis), using 106 MRI brain images.
A second objective is to explore the functional brain lateralization of gesture production (versus object manipulation) in baboons, using non-invasive wireless infrared spectroscopy in 8 trained subjects during interactions with humans.
A final, innovative objective is to investigate, through the first non-invasive longitudinal MRI study conducted from birth to sexual maturity in primates, the development and heritability of brain structural asymmetries and their correlates with gestural asymmetries in 30 baboons.
At both the evolutionary and developmental levels, the project will thus ultimately enhance our understanding of the role of gestures in the origins of brain specialization for language.
Max ERC Funding
1 499 192 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym GESTURALORIGINS
Project Gestural Origins: Linguistic Features of pan-African Ape Communication
Researcher (PI) Catherine HOBAITER
Host Institution (HI) THE UNIVERSITY COURT OF THE UNIVERSITY OF ST ANDREWS
Call Details Starting Grant (StG), SH4, ERC-2018-STG
Summary Understanding the origins of language speaks to the fundamental question of what it means to be human. Other species’ communication contains rich information exchange, but humans do more than broadcast information. Language is used to communicate goals to partners; it goes beyond information: it has meaning. Only great ape gestures show similarly systematic meaningful communication; they are essential to understanding how human language evolved.
Beyond meaning, two core features of human language are social learning and syntactic structure. These are universals, present across cultures. We all learn words and how to use them from others, leading to languages and dialects. We all use syntax, expressing different meanings by recombining words. In fact, these two features are common in animal communication: sperm whales learn songs from others; finches re-order notes into different songs. But, in a significant evolutionary puzzle, both appear absent from the communication of our closest relatives.
The discovery of meanings in ape gesture resulted from studying ape communication under the challenging natural conditions that allow its full expression. That was a single study of a single group: the tip of the iceberg. Employing pan-African data across 17 ape and 9 human groups, I will tackle three major objectives. (1) Is there cultural variation in ape gesture? We will look at how species, physical environment, and social interaction affect how apes acquire and use gestures. (2) When apes combine signals, does it change their meaning? Moving beyond sequential structure, we will look at how apes combine signals to construct meaning, and how the speed, size, and timing of gestures impact meaning. (3) Human-ape gesture: we will investigate adults’ and children’s use and comprehension of gestures to compare them directly with other apes. Using new and established techniques across a dramatically wider sample, I will address the fundamental question of how human language evolved.
Max ERC Funding
1 500 000 €
Duration
Start date: 2019-03-01, End date: 2024-02-29