Project acronym AFDMATS
Project Anton Francesco Doni – Multimedia Archive Texts and Sources
Researcher (PI) Giovanna Rizzarelli
Host Institution (HI) SCUOLA NORMALE SUPERIORE
Call Details Starting Grant (StG), SH4, ERC-2007-StG
Summary This project aims to create a multimedia archive of the printed works of Anton Francesco Doni, who was not only an author but also a typographer, a publisher and a member of the editorial staffs of Giolito and Marcolini. The analysis of Doni’s work is a promising way to investigate the practices of appropriation, text rewriting and image reuse typical of many 16th-century authors, as criticism of recent decades has clearly shown. This project intends to bring to light the wide range of impulses from which Doni’s texts are generated, with particular emphasis on the figurative aspect. The texts will be encoded following the TEI (Text Encoding Initiative) guidelines, which will enable each text to interact with a range of intertextual references both at a local level (within the same text) and at a macrostructural level (references to other texts by Doni or by other authors). The elements that will emerge from the textual encoding concern: A) The use of images. Real images: the complex relation between Doni’s writing and the xylographies available in Marcolini’s printing house or belonging to other collections. Mental images: the remarkable presence of verbal images, such as descriptions, ekphràseis, figurative visions, dreams and iconographic allusions not accompanied by illustrations but related to a recognizable visual repertoire or to real images that will be reproduced. B) The use of sources. A parallel archive of the texts most used by Doni will be created, with digital anastatic reproductions of the 16th-century editions known to Doni provided whenever available. The various forms of intertextuality will be divided into the following typologies: allusions; citations; rewritings; plagiarisms; self-quotations. Finally, the different forms of narrative (tales, short stories, anecdotes, lyrics) and the different idiomatic expressions (proverbial forms and wellerisms) will also be encoded.
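As a minimal sketch of the TEI encoding the project describes, the fragment below builds a TEI-style `<ref>` element whose @type attribute carries one of the intertextuality typologies listed above (allusion, citation, rewriting, plagiarism, self-quotation). The element and attribute names (`ref`, `type`, `target`) exist in the TEI guidelines; the target identifier and the quoted proverb are purely illustrative.

```python
# Illustrative sketch only: encoding one intertextual reference in a
# TEI-style fragment using Python's standard library. The @target id
# "#doni-marmi-1552" is a hypothetical pointer to a source edition.
import xml.etree.ElementTree as ET

p = ET.Element("p")
p.text = "As the proverb runs, "
ref = ET.SubElement(p, "ref", {"type": "citation", "target": "#doni-marmi-1552"})
ref.text = "tutto il mondo è paese"
ref.tail = ", an expression Doni reuses elsewhere."

xml = ET.tostring(p, encoding="unicode")
print(xml)
```

In a full archive each typology (allusion, self-quotation, …) would map onto a controlled vocabulary for @type, so local and macrostructural references become machine-queryable.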
Max ERC Funding
559 200 €
Duration
Start date: 2008-08-01, End date: 2012-07-31
Project acronym AUTISMS
Project Decomposing Heterogeneity in Autism Spectrum Disorders
Researcher (PI) Michael LOMBARDO
Host Institution (HI) FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
Call Details Starting Grant (StG), SH4, ERC-2017-STG
Summary Autism spectrum disorders (ASD) affect 1-2% of the population and are a major public health issue. Heterogeneity between affected ASD individuals is substantial at both clinical and etiological levels, warranting the idea that we should begin characterizing the ASD population as multiple kinds of ‘autisms’. Without an advanced understanding of how heterogeneity manifests in ASD, it is unlikely that we will make pronounced progress towards translational research goals that can have real impact on patients’ lives. This research program is focused on decomposing heterogeneity in ASD at multiple levels of analysis. Using multiple ‘big data’ resources that are both ‘broad’ (large sample size) and ‘deep’ (multiple levels of analysis measured within each individual), I will examine how known variables such as sex, early language development, early social preferences, and early intervention treatment response may be important stratification variables that differentiate ASD subgroups at phenotypic, neural systems/circuits, and genomic levels of analysis. In addition to examining known stratification variables, this research program will engage in data-driven discovery via the application of advanced unsupervised computational techniques that can highlight novel multivariate distinctions in the data that signal important ASD subgroups. These data-driven approaches hold promise for discovering novel ASD subgroups at biological and phenotypic levels of analysis that may be valuable for prioritization in future work developing personalized assessment, monitoring, and treatment strategies for subsets of the ASD population. By enhancing the precision of our understanding of the multiple subtypes of ASD, this work will help accelerate progress towards the ideals of personalized medicine and help reduce the burden of ASD on individuals, families, and society.
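The data-driven stratification idea can be sketched in miniature: cluster individuals on multivariate "deep phenotype" measures without using diagnostic labels and see whether coherent subgroups emerge. The sketch below uses synthetic data and a minimal k-means, standing in for the far richer unsupervised models the project would actually employ.

```python
# Toy illustration of unsupervised subgroup discovery. Rows are
# individuals, columns are standardized phenotypic measures; the data
# are synthetic, with two built-in subgroups for the demo.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),   # synthetic subgroup 1
               rng.normal(3.0, 1.0, (50, 4))])  # synthetic subgroup 2

def kmeans(X, k, iters=50):
    # Plain Lloyd's algorithm: assign each point to its nearest center,
    # then move each center to the mean of its assigned points.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)
```

In practice, model-based clustering with principled selection of the number of subgroups would replace fixed k=2, and the recovered clusters would be validated against independent biological levels of analysis.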
Max ERC Funding
1 499 444 €
Duration
Start date: 2018-01-01, End date: 2022-12-31
Project acronym BiT
Project How the Human Brain Masters Time
Researcher (PI) Domenica Bueti
Host Institution (HI) SCUOLA INTERNAZIONALE SUPERIORE DI STUDI AVANZATI DI TRIESTE
Call Details Consolidator Grant (CoG), SH4, ERC-2015-CoG
Summary If you suddenly hear your favourite song on the radio and spontaneously decide to burst into dance in your living room, you need to time your movements precisely if you do not want to end up on your bookshelf. Most of what we do or perceive depends on how accurately we represent the temporal properties of the environment; however, we cannot see or touch time. As such, time in the millisecond range is both a fundamental and an elusive dimension of everyday experience. Despite the obvious importance of time to information processing and to behavior in general, little is yet known about how the human brain processes time. Existing approaches to the study of the neural mechanisms of time mainly focus on identifying the brain regions involved in temporal computations (‘where’ time is processed in the brain), whereas most computational models vary in their biological plausibility and do not always make clear, testable predictions. BiT is a groundbreaking research program designed to challenge current models of time perception and to offer a new perspective on the study of the neural basis of time. The groundbreaking nature of BiT derives from the novelty of the questions asked (‘when’ and ‘how’ time is processed in the brain) and from addressing them with complementary but distinct research approaches (from human neuroimaging to brain stimulation techniques, from investigation of the whole brain to a focus on specific brain regions). By testing a new biologically plausible hypothesis of temporal representation (via duration tuning and ‘chronotopy’) and by scrutinizing the functional properties and, for the first time, the temporal hierarchies of ‘putative’ time regions, BiT will offer a multifaceted account of how the human brain represents time. This new knowledge will challenge our understanding of brain organization and function, which typically lacks a temporal perspective, and will impact our understanding of how the brain uses time information for perception and action.
Max ERC Funding
1 670 830 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym COGTOM
Project Cognitive tomography of mental representations
Researcher (PI) Máté Miklós LENGYEL
Host Institution (HI) KOZEP-EUROPAI EGYETEM
Call Details Consolidator Grant (CoG), SH4, ERC-2016-COG
Summary Internal models are fundamental to our understanding of how the mind constructs percepts, makes decisions, controls movements, and interacts with others. Yet we lack principled quantitative methods to systematically estimate internal models from observable behaviour, and current approaches for discovering their mental representations remain heuristic and piecemeal. I propose to develop a set of novel 'doubly Bayesian' data-analytical methods, using state-of-the-art Bayesian statistical and machine learning techniques to infer humans' internal models, formalised as prior distributions in Bayesian models of cognition. This approach, cognitive tomography, takes a series of behavioural observations, each of which may in itself have very limited information content, and accumulates a detailed reconstruction of the internal model from these observations. I also propose a set of stringent, quantifiable criteria which will be systematically applied at each step of the proposed work to rigorously assess the success of our approach. These methodological advances will allow us to track how the structured, task-general internal models that are so fundamental to humans' superior cognitive abilities change over time as a result of decay, interference, and learning. We will apply cognitive tomography to a variety of experimental data sets, collected by our collaborators, in paradigms ranging from perceptual learning, through visual and motor structure learning, to social and concept learning. These analyses will allow us to conclusively and quantitatively test our central hypothesis that, rather than simply changing along a single 'memory strength' dimension, internal models typically change via complex and consistent patterns of transformations along multiple dimensions simultaneously. To facilitate the widespread use of our methods, we will release and support off-the-shelf usable implementations of our algorithms together with synthetic and real test data sets.
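The 'doubly Bayesian' logic — the experimenter doing Bayesian inference over the parameters of the observer's own Bayesian prior — can be illustrated with a deliberately simplified toy. Here the observer's internal prior is assumed Gaussian with unknown mean, and each low-information behavioural report contributes a small amount of evidence that accumulates into a sharp posterior. This is a didactic stand-in, not the project's actual method.

```python
# Toy "doubly Bayesian" inference by grid approximation: recover the
# mean of an observer's internal prior from many noisy reports.
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma = 2.0, 1.0
reports = rng.normal(true_mu, sigma, size=200)    # behavioural observations

grid = np.linspace(-5, 5, 1001)                   # hypotheses about mu
log_post = np.zeros_like(grid)                    # flat prior over hypotheses
for r in reports:
    # Each report is weak evidence; Gaussian log-likelihood per hypothesis.
    log_post += -0.5 * ((r - grid) / sigma) ** 2
log_post -= log_post.max()                        # stabilize before exp
post = np.exp(log_post)
post /= post.sum()

mu_hat = grid[post.argmax()]                      # reconstructed prior mean
```

The same accumulation-of-weak-evidence scheme generalizes, at far greater computational cost, to structured nonparametric priors rather than a single scalar parameter.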
Max ERC Funding
1 179 462 €
Duration
Start date: 2017-05-01, End date: 2022-04-30
Project acronym COMPOSES
Project Compositional Operations in Semantic Space
Researcher (PI) Marco Baroni
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TRENTO
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary The ability to construct new meanings by combining words into larger constituents is one of the fundamental and peculiarly human characteristics of language. Systems that induce the meaning and combinatorial properties of linguistic symbols from data are highly desirable both from a theoretical perspective (modeling a core aspect of cognition) and for practical purposes (supporting human-computer interaction). COMPOSES tackles the meaning induction and composition problem from a new perspective that brings together corpus-based distributional semantics (that is very successful at inducing the meaning of single content words, but ignores functional elements and compositionality) and formal semantics (that focuses on functional elements and composition, but largely ignores lexical aspects of meaning and lacks methods to learn the proposed structures from data). As in distributional semantics, we represent some content words (such as nouns) by vectors recording their corpus contexts. Implementing instead ideas from formal semantics, functional elements (such as determiners) are represented by functions mapping from expressions of one type onto composite expressions of the same or other types. These composition functions are induced from corpus data by statistical learning of mappings from observed context vectors of input arguments to observed context vectors of composite structures. We model a number of compositional processes in this way, developing a coherent fragment of the semantics of English in a data-driven, large-scale fashion. Given the novelty of the approach, we also propose new evaluation frameworks: On the one hand, we take inspiration from cognitive science and experimental linguistics to design elicitation methods measuring the perceived similarity and plausibility of sentences. On the other, specialized entailment tests will assess the semantic inference properties of our corpus-induced system.
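The core estimation step described above — learning a composition function as a mapping from observed context vectors of arguments to observed context vectors of composite phrases — reduces, in its simplest linear form, to a multivariate regression. The sketch below uses synthetic vectors in place of corpus co-occurrence counts; only the learning scheme is faithful to the text.

```python
# Sketch: estimate a linear composition function for one functional
# element by least squares, from (argument vector, phrase vector) pairs.
# Data are synthetic; real inputs would be corpus context vectors.
import numpy as np

rng = np.random.default_rng(2)
d, n_pairs = 10, 200
A_true = rng.normal(size=(d, d))        # "true" map for one function word
nouns = rng.normal(size=(n_pairs, d))   # context vectors of argument nouns
phrases = nouns @ A_true.T              # observed composite-phrase vectors

# Solve phrases ≈ nouns @ A.T for A (a multivariate linear regression).
A_hat, *_ = np.linalg.lstsq(nouns, phrases, rcond=None)
A_hat = A_hat.T

new_noun = rng.normal(size=d)
composed = A_hat @ new_noun             # predicted vector for an unseen phrase
```

The learned map then generalizes: applying it to a noun vector never seen during training yields a predicted context vector for the novel composite expression.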
Max ERC Funding
1 117 636 €
Duration
Start date: 2011-11-01, End date: 2016-10-31
Project acronym ContentMAP
Project Contentotopic mapping: the topographical organization of object knowledge in the brain
Researcher (PI) Jorge ALMEIDA
Host Institution (HI) UNIVERSIDADE DE COIMBRA
Call Details Starting Grant (StG), SH4, ERC-2018-STG
Summary Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to parse a complex and recursive visual environment with ease and proficiency. This challenging feat depends on an effective organization of knowledge in the brain. In ContentMAP I will put forth a novel understanding of how object knowledge is organized in the brain, by proposing that this knowledge is topographically laid out on the cortical surface according to object-related dimensions that code for different types of representational content – I will call this contentotopic mapping. To study this fine-grained topography, I will use a combination of fMRI, behavioral, and neuromodulation approaches. I will first obtain patterns of neural and cognitive similarity between objects, and from these extract object-related dimensions using a dimensionality reduction technique. I will then parametrically manipulate these dimensions with an innovative use of a visual field mapping technique, and test how functional selectivity changes across the cortical surface according to an object’s score on a target dimension. Moreover, I will test the tuning function of these contentotopic maps. Finally, to mirror the complexity of implementing a high-dimensional manifold on a 2D cortical sheet, I will aggregate the topographies for the different dimensions into a composite map, and develop an encoding model to predict neural signatures for each object. To sum up, ContentMAP will have a dramatic impact on the cognitive sciences by describing how the stuff of concepts is represented in the brain, and by providing a complete description of how fine-grained representations and functional selectivity within high-level complex processes are topographically implemented.
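The first analysis step named above — extracting object-related dimensions from similarity patterns via dimensionality reduction — can be sketched with classical PCA on a synthetic object-by-feature matrix. The specific reduction technique the project would use is not stated, so PCA here is simply one representative choice.

```python
# Sketch: recover latent object dimensions from response patterns with
# PCA (via the SVD). Data are synthetic, built with two dominant latent
# dimensions so the recovery step has something to find.
import numpy as np

rng = np.random.default_rng(3)
n_objects, n_features = 60, 40
latent = rng.normal(size=(n_objects, 2))          # two "true" dimensions
loadings = rng.normal(size=(2, n_features))
X = latent @ loadings + 0.05 * rng.normal(size=(n_objects, n_features))

Xc = X - X.mean(0)                                # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]                         # object scores, top 2 dims

explained = (S[:2] ** 2).sum() / (S ** 2).sum()   # variance explained
```

Each object's score on a recovered dimension is exactly the quantity that would then be manipulated parametrically and related to position on the cortical surface.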
Max ERC Funding
1 816 004 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym CoPeST
Project Construction of perceptual space-time
Researcher (PI) David Paul Melcher
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TRENTO
Call Details Starting Grant (StG), SH4, ERC-2012-StG_20111124
Summary The foundation of lived experience is that it occurs in a particular space and time. Objects, events and actions happen in the present moment in a unified space which surrounds our body. As noted by Immanuel Kant, space and time are a priori concepts that organize our thoughts and experiences. Yet basic laboratory experiments reveal the cracks in this illusion of a unified perceptual space-time. Our subjective experience is a construction created out of the responses of numerous sensory detectors which give only limited information. In terms of space, the sensory input from a multitude of tiny windows is organized based on the coordinates of the receptor system, such as the fingertip or a specific location on the retina. In terms of time, sensory input is summed over a limited period which varies widely across different receptor types. Critically, none of these sensory detectors has a spatial-temporal response that corresponds to our subjective experience. Nonetheless, the mind constructs an illusion of unified space and continuous time out of the variegated responses. The goal of this project is to uncover the mechanisms underlying smooth and continuous perception. This project builds on a decade of groundwork in studying specific instances of the integration of visual information over space and time with a new focus on the mechanisms that unite the various phenomena which have up to now been studied separately. A combination of behavioral, neuroimaging and computational approaches will be used to identify the mechanisms underlying spatio-temporal continuity in high-level perception. We will track the dynamic shifts between the various temporal and spatial coordinate frames used to encode information in the brain, a topic which has remained largely unexplored. This research project, driven by specific hypotheses, aims to uncover how uni-sensory, ego-centric sensory responses give rise to the rich, multisensory experience of unified space-time.
Max ERC Funding
1 002 102 €
Duration
Start date: 2013-01-01, End date: 2017-12-31
Project acronym CRASK
Project Cortical Representation of Abstract Semantic Knowledge
Researcher (PI) Scott Laurence Fairhall
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TRENTO
Call Details Starting Grant (StG), SH4, ERC-2014-STG
Summary The study of semantic memory considers a broad range of knowledge extending from basic elemental concepts that allow us to recognise and understand objects like ‘an apple’, to elaborated semantic information such as knowing when it is appropriate to use a Wilcoxon rank-sum test. Such elaborated semantic knowledge is fundamental to our daily lives, yet our understanding of its neural substrates is minimal. The objective of CRASK is to advance rapidly beyond the state of the art to address this issue. CRASK will begin by building a fundamental understanding of regional contributions, hierarchical organisation and regional coordination to form a predictive systems model of semantic representation in the brain. This will be accomplished through convergent evidence from an innovative combination of fine cognitive manipulations, multimodal imaging techniques (fMRI, MEG), and advanced analytical approaches (multivariate analysis of response patterns, representational similarity analysis, functional connectivity). Progress will proceed in stages. First the systems-level network underlying our knowledge of other people will be determined. Once this is accomplished CRASK will investigate general semantic knowledge in terms of the relative contribution of canonical, feature-selective and category-selective semantic representations and their respective roles in automatic and effortful semantic access. The systems-level model of semantic representation will be used to predict and test how the brain manifests elaborated semantic knowledge. The resulting understanding of the neural substrates of elaborated semantic knowledge will open up new areas of research. In the final stage of CRASK we chart this territory in terms of human factors: understanding the role of the representational semantic system in transient failures in access, neural factors that lead to optimal encoding and retrieval and the effects of ageing on the system.
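Representational similarity analysis, one of the analytical approaches listed above, can be shown in miniature: build a model dissimilarity matrix and a (here simulated) neural one, then correlate their off-diagonal entries. Everything below is synthetic and purely illustrative of the RSA computation itself.

```python
# Minimal RSA sketch: correlation between the upper triangles of a
# model RDM and a simulated "neural" RDM that partly reflects it.
import numpy as np

rng = np.random.default_rng(4)
n_items = 20
model_rdm = np.abs(rng.normal(size=(n_items, n_items)))
model_rdm = (model_rdm + model_rdm.T) / 2          # RDMs are symmetric
np.fill_diagonal(model_rdm, 0.0)                   # zero self-dissimilarity

# Simulated neural RDM: model structure plus measurement noise.
noise = 0.3 * np.abs(rng.normal(size=model_rdm.shape))
neural_rdm = model_rdm + (noise + noise.T) / 2
np.fill_diagonal(neural_rdm, 0.0)

iu = np.triu_indices(n_items, k=1)                 # off-diagonal entries only
rsa_r = np.corrcoef(model_rdm[iu], neural_rdm[iu])[0, 1]
```

Comparing such correlations across candidate models and brain regions is what lets RSA arbitrate between canonical, feature-selective and category-selective accounts of a region's representations.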
Max ERC Funding
1 472 502 €
Duration
Start date: 2015-05-01, End date: 2020-04-30
Project acronym ECSPLAIN
Project Early Cortical Sensory Plasticity and Adaptability in Human Adults
Researcher (PI) Maria Concetta Morrone
Host Institution (HI) UNIVERSITA DI PISA
Call Details Advanced Grant (AdG), SH4, ERC-2013-ADG
Summary Neuronal plasticity is an important mechanism for memory and cognition, and also fundamental for fine-tuning perception to the environment. It has long been thought that sensory neural systems are plastic only in very young animals, during the so-called “critical period”. However, recent evidence – including work from our laboratory – suggests that the adult brain may retain far more capacity for plastic change than previously assumed, even for basic visual properties like ocular dominance. This project probes the underlying neural mechanisms of adult human plasticity, and investigates its functional role in important processes such as response optimization, auto-calibration and recovery of function. We propose a range of experiments employing many experimental techniques, organized within four principal research lines. The first (and major) research line studies the effects of brief periods of monocular deprivation on functional cortical reorganization of adults, measured by psychophysics (binocular rivalry), ERP, functional imaging and MR spectroscopy. We will also investigate the clinical implications of monocular patching in children with amblyopia. Another research line looks at the effects of longer-term deprivation, such as those induced by hereditary cone dystrophy. Another examines the interplay between plasticity and visual adaptation in early visual cortex, with techniques aimed at modulating the retinotopic organization of primary visual cortex. Finally, we will use fMRI to study development and plasticity in newborns, providing benchmark data to assess residual plasticity of older humans. Pilot studies have been conducted on most of the proposed lines of research (including fMRI recording from alert newborns), attesting to their feasibility and the likelihood of them being completed within the timeframe of this grant. The PI has considerable experience in all these research areas.
Max ERC Funding
2 493 000 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym eHONESTY
Project Embodied Honesty in Real World and Digital Interactions
Researcher (PI) Salvatore Maria AGLIOTI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA
Call Details Advanced Grant (AdG), SH4, ERC-2017-ADG
Summary Every day, everywhere, people make unethical choices ranging from minor selfish lies to massive frauds, with dramatic individual and societal costs.
Embodied cognition theories posit that even seemingly abstract processes (like grammar) may be biased by the body-related signals used for building and maintaining self-consciousness: the fundamental experience of owning a body (ownership) and of being the author of an action (agency), which underlies self-other distinction.
Applying this framework to morality, we hypothesize that strengthening or weakening participants’ bodily self-consciousness towards virtual avatars or real others will influence dishonesty in real, virtual, and web-based interactions.
To test this hypothesis, we will measure:
i) individual dishonesty after modifying body ownership (e.g., by changing the appearance of the virtual body) and agency (e.g., by changing the temporal synchrony between participant’s and avatar’s actions) over an avatar through which decisions are made;
ii) intergroup dishonesty after inducing inter-individual sharing of bodily self-consciousness (e.g., blurring self-other distinction via facial visuo-tactile stimulation);
iii) individual and intergroup dishonesty by manipulating exteroceptive (e.g., the external features of a virtual body) or interoceptive (e.g., changing the degree of synchronicity between participant’s and avatar/real person’s breathing rhythm) bodily inputs.
Dishonesty will be assessed through novel ecological tasks based on virtual reality and web-based interactions. Behavioural (e.g., subjective reports, kinematics), autonomic (e.g., heartbeat, thermal imaging) and brain (e.g., EEG, TMS, lesion analyses) measures of dishonesty will be recorded in healthy and clinical populations.
Our person-based, embodied approach to dishonesty complements cross-cultural, large-scale, societal investigations and may inspire new strategies for countering dishonesty and other unethical behaviours.
Max ERC Funding
2 497 188 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym GenPercept
Project Spatio-temporal mechanisms of generative perception
Researcher (PI) David BURR
Host Institution (HI) UNIVERSITA DEGLI STUDI DI FIRENZE
Call Details Advanced Grant (AdG), SH4, ERC-2018-ADG
Summary How do we rapidly and effortlessly compute a vivid veridical representation of the external world from the noisy and ambiguous input supplied by our sensors? One possibility is that the brain does not process all incoming sensory information anew, but actively generates a model of the world from past experience, and uses current sensory data to update that model. This classic idea has been well formalised within the modern framework of Generative Bayesian Inference. However, despite these recent theoretical and empirical advances, there is no definitive proof that generative mechanisms prevail in perception, and fundamental questions remain.
The ambitious aim of GenPercept is to establish the importance of generative processes in perception, characterise quantitatively their functional role, and describe their underlying neural mechanisms. With innovative psychophysical and pupillometry techniques, it will show how past perceptual experience is exploited to manage and mould sensory analysis of the present. With ultra-high field imaging, it will identify the underlying neural mechanisms in early sensory cortex. With EEG and custom psychophysics it will show how generative predictive mechanisms mediate perceptual continuity at the time of saccadic eye movements, and explore the innovative idea that neural oscillations reflect reverberations in the propagation of generative prediction and error signals. Finally, it will look at individual differences, particularly in autistic perception, where generative mechanisms show interesting atypicalities.
A full understanding of generative processes will lead to fundamental insights in understanding how we perceive and interact with the world, and how past perceptual experience influences what we perceive. The project is also of clinical relevance, as these systems are prone to dysfunction in several neuro-behavioural conditions, including autism spectrum disorder.
Max ERC Funding
2 480 969 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym HANDmade
Project How natural hand usage shapes behavior and intrinsic and task-evoked brain activity
Researcher (PI) Viviana BETTI
Host Institution (HI) UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA
Call Details Starting Grant (StG), SH4, ERC-2017-STG
Summary A seminal concept in modern neuroscience is the plasticity of the developing and adult brain, which underpins the organism's ability to adapt to the ever-changing environment and internal states. Conversely, recent studies indicate that ongoing sensory input is not crucial for modulating the overall level of brain activity, which is instead strongly determined by its intrinsic fluctuations. These observations raise a fundamental question: what is coded in the intrinsic activity? This project tests the hypothesis that intrinsic activity represents and maintains an internal model of the environment built through the integration of information from visual and bodily inputs. The bodily inputs represent the physical and functional interaction that our body establishes with the external environment. In this framework, the hand has a special role, as it represents the primary means of interaction with the environment.
Do behavior and mental activity change as a function of the effector we use to interact with the external environment? In virtual settings, I test the resilience of the internal model to extreme manipulations of the body by replacing the hand with everyday tools. The hypothesis is that prior representations constrain novel behaviors and plastic changes of both intrinsic and task-related brain activities. This prediction is also tested on samples of acquired amputees. These subjects represent an interesting model because hand loss may entail a loss of sensory representations and weaker constraints on task-related brain activation.
Through a combination of behavioral approaches, methods and techniques ranging from kinematics to functional neuroimaging (fMRI and MEG) and virtual reality, this project provides insights on how the synergic activity of body and environment shapes behavior and neural activity. This grant might open novel opportunities for future developments of robotic-assisted technology and neuroprostheses.
Max ERC Funding
1 494 662 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym I.MOVE.U
Project Intention-from-MOVEment Understanding: from moving bodies to interacting minds
Researcher (PI) Cristina Becchio
Host Institution (HI) FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
Call Details Starting Grant (StG), SH4, ERC-2012-StG_20111124
Summary From observing other people’s movements, humans make inferences that go far beyond the appearance of the observed stimuli: inferences about unobservable mental states such as goals and intentions. Although this ability is critical for successful social interaction, little is known about how – often fast and reliably – we are able to make such inferences.
I.MOVE.U intends to provide the first comprehensive account of how intentions are extracted from body motion during interaction with conspecifics. Covert mental states such as intentions become visible to the extent that they contribute as dynamic factors to generate the kinematics of a given action. By combining advanced methods in psychophysics and neuroscience with kinematics and virtual reality technologies, this project will study i) to what extent observers are sensitive to intention information conveyed by body movements; ii) what mechanisms and neural processes mediate the ability to extract intention from body motion; iii) how, during on-line social interaction with another agent, agents use their own actions to predict the partner’s intention. These issues will be addressed at different levels of analysis (motor, cognitive, neural) in neurotypical participants and participants with autism spectrum disorders. For the first time, to investigate real-time social interaction, full-body tracking will be combined with online generation of biological motion stimuli to obtain visual biological motion stimuli directly dependent on the actual behavior of participants.
I.MOVE.U pioneers a new area of research at the intersection of motor cognition and social cognition, providing knowledge of direct scientific, clinical, and technological impact. The final outcome of the project will result in a new quantitative methodology to investigate the decoding of intention during interaction with conspecifics.
Max ERC Funding
999 920 €
Duration
Start date: 2013-09-01, End date: 2018-08-31
Project acronym InStance
Project Intentional stance for social attunement
Researcher (PI) Agnieszka Anna Wykowska
Host Institution (HI) FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
Call Details Starting Grant (StG), SH4, ERC-2016-STG
Summary In daily social interactions, we constantly attribute mental states, such as beliefs or intentions, to other humans – to understand and predict their behaviour. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we will casually interact with robots. However, since we consider artificial agents to have no mental states, we tend not to attune socially with them, in the sense of activating our mechanisms of social cognition. This is because it seems pointless to socially attune to something that does not carry social meaning (mental content) under the surface of an observed behaviour. INSTANCE will break new ground in social cognition research by identifying factors that influence attribution of mental states to others and social attunement with humans or artificial agents. The objectives of INSTANCE are to (1) determine parameters of others’ behaviour that make us attribute mental states to them, (2) explore parameters relevant for social attunement, (3) elucidate further factors – culture and experience – that influence attribution of mental states to agents and, thereby, social attunement. INSTANCE’s objectives are highly relevant not only for fundamental research in social cognition, but also for the applied field of social robotics, where robots are expected to become humans’ social companions. Indeed, if we do not attune socially to artificial agents viewed as mindless machines, then robots may end up not working well enough in contexts where interaction is paramount. INSTANCE’s unique approach, combining cognitive neuroscience methods with real-time human-robot interaction, will address the challenge of social attunement between humans and artificial agents. Subtle features of robot behaviour (e.g., timing or pattern of eye movements) will be manipulated. The impact of such features on social attunement (e.g., joint attention) will be examined with behavioural, neural and physiological measures.
Max ERC Funding
1 499 937 €
Duration
Start date: 2017-05-01, End date: 2022-04-30
Project acronym JAXPERTISE
Project Joint action expertise: Behavioral, cognitive, and neural mechanisms for joint action learning
Researcher (PI) Natalie Sebanz
Host Institution (HI) KOZEP-EUROPAI EGYETEM
Call Details Consolidator Grant (CoG), SH4, ERC-2013-CoG
Summary Human life is full of joint action and our achievements are, to a large extent, joint achievements that require the coordination of two or more individuals. Piano duets and tangos, but also complex technical and medical operations rely on and exist because of coordinated actions. In recent years, research has begun to identify the basic mechanisms of joint action. This work focused on simple tasks that can be performed together without practice. However, a striking aspect of human joint action is the expertise interaction partners acquire together. How people acquire joint expertise is still poorly understood. JAXPERTISE will break new ground by identifying the behavioural, cognitive, and neural mechanisms underlying the learning of joint action. Participating in joint activities is also a motor for individual development. Although this has long been recognized, the mechanisms underlying individual learning through engagement in joint activities remain to be spelled out from a cognitive science perspective. JAXPERTISE will make this crucial step by investigating how joint action affects source memory, semantic memory, and individual skill learning. Carefully designed experiments will optimize the balance between capturing relevant interpersonal phenomena and maximizing experimental control. The proposed studies employ behavioural measures, electroencephalography, and physiological measures. Studies tracing learning processes in novices will be complemented by studies analyzing expert performance in music and dance. New approaches, such as training participants to regulate each other’s brain activity, will lead to methodological breakthroughs. JAXPERTISE will generate basic scientific knowledge that will be relevant to a large number of different disciplines in the social sciences, cognitive sciences, and humanities. The insights gained in this project will have impact on the design of robot helpers and the development of social training interventions.
Max ERC Funding
1 992 331 €
Duration
Start date: 2014-08-01, End date: 2019-07-31
Project acronym LEX-MEA
Project Life EXperience Modulations of Executive function Asymmetries
Researcher (PI) Antonino Vallesi
Host Institution (HI) UNIVERSITA DEGLI STUDI DI PADOVA
Call Details Starting Grant (StG), SH4, ERC-2012-StG_20111124
Summary Executive functions are a set of cognitive processes underlying goal-directed behaviour. Two crucial executive functions are criterion-setting, the ability to form new rules, and monitoring, the capacity to evaluate whether those rules are being applied correctly. They differentially engage left and right prefrontal regions. Determining the impact of experience on these key functions will help understand individual differences and, crucially, reveal the available degrees of freedom for active intervention in case of decline or deficit. The central goal of the LEX-MEA proposal is to unveil which neural and experiential factors shape these high-level functions across the life-span. The specific aim of the proposal is threefold. First, by using a multimodal neuroimaging approach, it will unveil how prefrontal hemispheric asymmetries underlying executive functions change depending on the task context, and whether this division of labour is advantageous. Second, it will study how significant real-life experiences, such as practicing a skill that entails a specific executive function, modulate these functions and their neural underpinning. We will target 2 groups of professionals, simultaneous translators and air traffic controllers, who make extensive use of criterion-setting and monitoring, respectively, to test whether, in different stages of skill acquisition, they show a generalized benefit for the specific executive function trained. Third, we will test whether having practiced a skill requiring a certain executive function throughout life constitutes a compensatory factor against cognitive aging. The ultimate objective is to lay the cognitive and neural foundation for a full understanding of these extraordinary abilities not only in normal conditions but also in diverse diseases, and to boost particular executive functions with tailored, theory-guided training programs.
Max ERC Funding
1 475 950 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym LIGHTUP
Project Turning the cortically blind brain to see: from neural computations to system dynamics generating visual awareness in humans and monkeys
Researcher (PI) Marco TAMIETTO
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TORINO
Call Details Consolidator Grant (CoG), SH4, ERC-2017-COG
Summary Visual awareness affords flexibility and experiential richness, and its loss following brain damage has devastating effects. However, patients with blindness following cortical damage may retain visual functions even though visual awareness is lacking (blindsight). How, then, can we translate non-conscious visual abilities into conscious ones after damage to the visual cortex? To place our understanding of visual awareness on firm neurobiological and mechanistic bases, I propose to integrate human and monkey neuroscience. I will then translate this knowledge into evidence-based clinical intervention. First, LIGHTUP will apply computational neuroimaging methods at the micro-scale level, estimating population receptive fields in humans and monkeys. This will make it possible to analyze the fMRI signal in a way similar to how tuning properties are studied in neurophysiology, and to clarify how brain areas translate visual properties into responses associated with awareness. Second, LIGHTUP leverages a behavioural paradigm that can dissociate non-conscious visual abilities from awareness in monkeys, thus offering a refined animal model of visual awareness. Applying behavioural Dynamic Causal Modelling to combine fMRI and behavioural data, LIGHTUP will build a Bayesian framework that specifies the directionality of information flow in the interactions across distant brain areas, and their causal role in generating visual awareness. In the third part, I will devise a rehabilitation protocol that combines brain stimulation and visual training to promote the (re)emergence of lost visual awareness. LIGHTUP will exploit non-invasive transcranial magnetic stimulation (TMS) in a novel protocol that enables stimulation of complex cortical circuits and selection of the direction of connectivity to be enhanced. This associative stimulation has been shown to induce Hebbian plasticity, and we have piloted its effects in fostering visual awareness in association with visual restoration training.
Max ERC Funding
1 994 212 €
Duration
Start date: 2018-08-01, End date: 2023-07-31
Project acronym NEUROINT
Project How the brain codes the past to predict the future
Researcher (PI) Uri Hasson
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TRENTO
Call Details Starting Grant (StG), SH4, ERC-2010-StG_20091209
Summary The overarching objective of this research program is to use neuroimaging methods to determine how the recent past is coded in the human brain and how this coding contributes to the processing of incoming information. A central tenet of this proposal is that being able to maintain a representation of the recent past is fundamental for constructing internal predictions about future states of the environment. The construction of such predictions has been called predictive coding, and these predictions have been argued to play a fundamental role in disambiguating signal information from a noisy or degraded array.
We will implement a comprehensive and multi-disciplinary research program to understand how regularities in the recent past are coded and how they give rise to predictive codes of future states. On the basis of prior work, we propose that disambiguation of signals is performed by a predictive system that relies strongly on representing the statistical properties of the recent past. This system is instantiated via interactions between three neural systems: (1) medial temporal structures, including the hippocampus and parahippocampal cortex, that encode statistical features of the recent past and signal whether predictions are licensed; (2) higher-level cortical regions that code for detailed predictions in various modalities and generate efferent top-down predictions; and (3) lower-level sensory cortices whose activity at any given moment reflects not only bottom-up processing of sensory inputs but also the assessment of these inputs against top-down predictions propagated from higher-level regions. We will use neuroimaging methods with high spatial and temporal resolution (fMRI, MEG) to study neural activity in these three systems and the interactions between them.
Max ERC Funding
978 678 €
Duration
Start date: 2011-01-01, End date: 2015-12-31
Project acronym NOAM
Project Navigation of a mind-space. The spatial organization of declarative knowledge
Researcher (PI) Roberto Bottini
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TRENTO
Call Details Starting Grant (StG), SH4, ERC-2018-STG
Summary Your brain is among the most complex systems in existence, and every second it processes an amazing amount of data. The most amazing thing, however, is that you get to know some of it.
Declarative knowledge, the portion of knowledge that we can consciously access and manipulate, is one of the most enduring mysteries of the human mind. How did it evolve? And what are the mechanisms behind it? One possibility is that the complex neural machinery that mammals evolved to navigate space has been recycled to "navigate" declarative knowledge. Research ranging from single-cell recordings in rodents to brain-imaging studies in humans is converging on the fascinating hypothesis that conscious declarative knowledge is spatially organized and can be stored, retrieved and manipulated through the same computations used to represent and navigate physical space. Crucially, this spatial scaffolding may be what makes knowledge accessible to us.
The time is ripe for an integral and ambitious attempt to test and develop this innovative hypothesis. NOAM will be at the frontline of this endeavour, relying on cutting-edge neuroimaging and analysis techniques. In this project we will test the relationship between spatial and conceptual navigation, asking whether people who navigate space in a different way (congenitally blind individuals) also navigate concepts in a different way. Then, we will explore how low-dimensional cognitive maps interact with multidimensional semantic information, and we will test whether spatial organization is a trademark of conscious declarative knowledge or extends to unconscious conceptual processing. Finally, we will adopt a translational approach to characterize the neural basis of preclinical Alzheimer's disease.
Thanks to its groundbreaking nature and high-risk/high-gain approach, NOAM has the potential to ensure major progress in cognitive neuroscience, artificial intelligence and related fields, changing the way we think about the human mind.
Max ERC Funding
1 498 644 €
Duration
Start date: 2019-04-01, End date: 2024-03-31
Project acronym Objectivity
Project Making Scientific Inferences More Objective
Researcher (PI) Jan (Michael) Sprenger
Host Institution (HI) UNIVERSITA DEGLI STUDI DI TORINO
Call Details Starting Grant (StG), SH4, ERC-2014-STG
Summary What makes scientific inferences trustworthy? Why do we think that scientific knowledge is more than the subjective opinion of clever people at universities? In answering these questions, the notion of objectivity plays a crucial role: the label "objective" (1) marks an inference as unbiased and trustworthy and (2) grounds the authority of science in society. Conversely, any challenge to this image of objectivity undermines public trust in science. Sometimes these challenges consist in outright conflicts of interest, but sometimes they are of a foundational epistemic nature. For instance, standard inference techniques in medicine and psychology have been shown to give a biased and misleading picture of reality.
My project addresses precisely these epistemic challenges and develops ways of making scientific inferences more objective. Our key move is to go beyond the traditional definition of objectivity as a "view from nowhere" and to calibrate the most recent philosophical accounts of objectivity (e.g., convergence of different inference methods) against the practice of scientific inference. The combination of normative and descriptive analysis is likely to break new ground in philosophy of science and beyond. In particular, we will demonstrate how two salient features of scientific practice (methodological pluralism and subjective choices in inference) can be reconciled with the aim of objective knowledge.
The benefits of the proposed research are manifold. First and foremost, it will greatly enhance our understanding of the scope and limits of scientific objectivity. Second, it will improve standard forms of scientific inference, such as hypothesis testing and causal and explanatory reasoning. This will be highly useful for scientific practitioners in nearly all empirical disciplines. Third, we will apply our theoretical insights to improving the design and interpretation of clinical trials, where objectivity and impartiality are sine qua non requirements.
Max ERC Funding
1 487 928 €
Duration
Start date: 2015-09-01, End date: 2020-08-31