Project acronym A-DATADRIVE-B
Project Advanced Data-Driven Black-box modelling
Researcher (PI) Johan Adelia K Suykens
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Advanced Grant (AdG), PE7, ERC-2011-ADG_20110209
Summary Making accurate predictions is a crucial factor in many systems (such as in modelling energy consumption, power load forecasting, traffic networks, process industry, environmental modelling, biomedicine, brain-machine interfaces) for cost savings, efficiency, health, safety and organizational purposes. In this proposal we aim to realize a new generation of more advanced black-box modelling techniques for estimating predictive models from measured data. We will study different optimization modelling frameworks in order to obtain improved black-box modelling approaches. This will be done by specifying models as constrained optimization problems, studying different candidate core models (parametric models, support vector machines and kernel methods) together with additional sets of constraints and regularization mechanisms. Different candidate mathematical frameworks will be considered, with models that possess primal and (Lagrange) dual model representations, functional analysis in reproducing kernel Hilbert spaces, operator splitting and optimization in Banach spaces. Several aspects that are relevant to black-box models will be studied, including incorporation of prior knowledge, structured dynamical systems, tensorial data representations, interpretability and sparsity, and general-purpose optimization algorithms. The methods should be suitable for handling larger data sets and high-dimensional input spaces. The final goal is also to realize a next-generation software tool (including symbolic generation of models and handling different supervised and unsupervised learning tasks, static and dynamic systems) that can be generically applied to data from different application areas. The proposal A-DATADRIVE-B aims to connect end-users to the more advanced methods through a user-friendly data-driven black-box modelling tool. The methods and tool will be tested in connection with several real-life applications.
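The candidate core models named above include support vector machines and kernel methods with primal and (Lagrange) dual model representations. As an illustrative aside (not taken from the proposal), the sketch below fits a least-squares SVM regressor by solving its dual linear system; the RBF kernel choice, parameter values and function names are assumptions made for illustration only:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between two sample sets
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=1000.0, sigma=0.3):
    # Solve the LS-SVM dual KKT system:
    # [ 0   1^T         ] [ b     ]   [ 0 ]
    # [ 1   K + I/gamma ] [ alpha ] = [ y ]
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, Xq, b, alpha, sigma=0.3):
    # Dual model representation: f(x) = sum_i alpha_i k(x, x_i) + b
    return rbf_kernel(Xq, X_train, sigma) @ alpha + b
```

Note that the dense (n+1)-by-(n+1) solve is only feasible for small n; handling larger data sets, as the proposal emphasizes, requires going beyond such a naive solver.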
Max ERC Funding
2 485 800 €
Duration
Start date: 2012-04-01, End date: 2017-03-31
Project acronym ABACUS
Project Advancing Behavioral and Cognitive Understanding of Speech
Researcher (PI) Bart De Boer
Host Institution (HI) VRIJE UNIVERSITEIT BRUSSEL
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary I intend to investigate what cognitive mechanisms give us combinatorial speech. Combinatorial speech is the ability to make new words using pre-existing speech sounds. Humans are the only apes that can do this, yet we do not know how our brains do it, nor how exactly we differ from other apes. Using new experimental techniques to study human behavior and new computational techniques to model human cognition, I will find out how we deal with combinatorial speech.
The experimental part will study individual and cultural learning. Experimental cultural learning is a new technique that simulates cultural evolution in the laboratory. Two types of cultural learning will be used: iterated learning, which simulates language transfer across generations, and social coordination, which simulates emergence of norms in a language community. Using the two types of cultural learning together with individual learning experiments will help to zero in, from three angles, on how humans deal with combinatorial speech. In addition it will make a methodological contribution by comparing the strengths and weaknesses of the three methods.
The computer modeling part will formalize hypotheses about how our brains deal with combinatorial speech. Two models will be built: a high-level model that will establish the basic algorithms with which combinatorial speech is learned and reproduced, and a neural model that will establish in more detail how the algorithms are implemented in the brain. In addition, the models, through increasing understanding of how humans deal with speech, will help bridge the performance gap between human and computer speech recognition.
The project will advance science in four ways: it will provide insight into how our unique ability for using combinatorial speech works, it will tell us how this is implemented in the brain, it will extend the novel methodology of experimental cultural learning and it will create new computer models for dealing with human speech.
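As a hedged illustration of the iterated-learning paradigm described above (transmission through a learning bottleneck across simulated generations), the toy model below tracks how a syllable inventory compresses when each generation rebuilds its lexicon only from the syllables it happened to observe. All names and parameter values are invented for illustration and do not come from the proposal:

```python
import random

def iterated_learning(n_words=30, n_gen=10, bottleneck=10, word_len=3, seed=0):
    # Toy transmission chain: each generation observes only a 'bottleneck'
    # sample of the previous lexicon, then produces new words by recombining
    # the syllables it has seen. The syllable inventory can therefore only
    # shrink or stay the same from one generation to the next.
    rng = random.Random(seed)
    syllables = ["ba", "be", "bi", "da", "de", "di",
                 "ga", "ge", "gi", "ka", "ke", "ki"]
    lexicon = [tuple(rng.choice(syllables) for _ in range(word_len))
               for _ in range(n_words)]
    history = []  # inventory size per generation
    for _ in range(n_gen):
        observed = rng.sample(lexicon, bottleneck)
        inventory = sorted({s for w in observed for s in w})
        history.append(len(inventory))
        lexicon = [tuple(rng.choice(inventory) for _ in range(word_len))
                   for _ in range(n_words)]
    return history
```

Running the chain yields a non-increasing inventory size, a crude stand-in for the regularization effects that laboratory iterated-learning experiments probe with real participants.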
Max ERC Funding
1 276 620 €
Duration
Start date: 2012-02-01, End date: 2017-01-31
Project acronym EarlyDev
Project Brain networks for processing social signals of emotions: early development and the emergence of individual differences
Researcher (PI) Jukka Matias Leppänen
Host Institution (HI) TAMPEREEN YLIOPISTO
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary Recent research has shown that genetic variations in central serotonin function are associated with biases in emotional information processing (heightened attention to signals of negative emotion) and that these biases contribute significantly to vulnerability to affective disorders. Here, we propose to examine a novel hypothesis that the biases in attention to emotional cues are ontogenetically primary, arise very early in development, and modulate an individual’s interaction with the environment during development. The four specific aims of the project are to 1) test the hypothesis that developmental processes resulting in increased functional connectivity of visual and emotion/attention-related neural systems (i.e., increased phase-synchrony of oscillatory activity) from 5 to 7 months of age are associated with the emergence of an overt attentional bias towards affectively salient facial expressions at 7 months of age, 2) use eye-tracking to ascertain that the attentional bias in 7-month-old infants reflects sensitivity to the emotional signal value of facial expressions instead of correlated non-emotional features, 3) test the hypothesis that increased serotonergic tone early in life (through genetic polymorphisms or exposure to serotonin-enhancing drugs) is associated with reduced control of attention to affectively salient facial expressions and reduced temperamental emotion-regulation at 7, 24 and 48 months of age, and 4) examine the plasticity of the attentional bias towards emotional facial expressions in infancy, particularly whether the bias can be overridden by using positive reinforcers. The proposed studies will be the first to explicate the neural bases and nature of early-emerging cognitive deficits and biases that pose a risk for emotional dysfunction. As such, the results will be very important for developing intervention methods that benefit from the plasticity of the developing brain and skill formation to support healthy development.
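The phase-synchrony measure of functional connectivity mentioned in aim 1 is commonly quantified as a phase-locking value (PLV) across trials. The sketch below is a generic, assumption-laden illustration of that computation on band-passed signals, not the project's actual analysis pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    # x, y: arrays of shape (n_trials, n_samples), band-passed to the
    # oscillatory band of interest. The instantaneous phase of each trial
    # is taken from the analytic signal (Hilbert transform), and
    # PLV = |mean over trials of exp(i * phase difference)| per sample:
    # 1 means a perfectly consistent phase lag across trials, 0 means none.
    phi_x = np.angle(hilbert(x, axis=-1))
    phi_y = np.angle(hilbert(y, axis=-1))
    return np.abs(np.exp(1j * (phi_x - phi_y)).mean(axis=0))
```

Two signals with a fixed phase lag across trials yield PLV near 1; trials with independent random phases yield PLV near zero (up to a 1/sqrt(n_trials) noise floor).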
Max ERC Funding
1 397 351 €
Duration
Start date: 2012-02-01, End date: 2017-01-31
Project acronym facessvep
Project UNDERSTANDING THE NATURE OF FACE PERCEPTION: NEW INSIGHTS FROM STEADY-STATE VISUAL EVOKED POTENTIALS
Researcher (PI) Bruno Rossion
Host Institution (HI) UNIVERSITE CATHOLIQUE DE LOUVAIN
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary Face recognition is one of the most complex functions of the human mind/brain, so that no artificial device can surpass human abilities in this function. The goal of this project is to understand a fundamental aspect of face recognition, individual face perception: how, from sensory information, does the human mind/brain build a visual representation of a particular face? To clarify this question, I will introduce the method of steady-state visual evoked potentials (SSVEPs) in the field of face perception. This approach had not previously been applied to face perception, but we recently started using it and collected strong data demonstrating the feasibility of the approach. It is based on the repetitive stimulation of the visual system at a fixed frequency rate, and the recording on the human scalp of an electrical response (electroencephalogram, EEG) that oscillates at that specific frequency rate. Because of its extremely high signal-to-noise ratio and its non-ambiguity with respect to the measurement of the signal of interest, this method is ideal to assess the human brain’s sensitivity to facial identity, non-invasively, and with the exact same approach in normal adults, infants and children, as well as clinical populations. SSVEP will also allow “tagging” different features of a stimulus with different stimulation frequencies (“frequency-tagging” method), and thus measure the representation and processing of these features independently, as well as their potential integration. Overall, this proposal should shed light on understanding one of the most complex functions of the human mind/brain, while its realization will undoubtedly generate relevant data and paradigms useful for understanding other aspects of face processing (e.g., perception of facial expression) and high-level visual perception processes in general.
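The high signal-to-noise ratio the summary refers to comes from the response concentrating in a single frequency bin of the EEG spectrum. As a generic illustration (an assumption, not the project's analysis code), the sketch below computes the spectral SNR at a tagged stimulation frequency as the amplitude in that bin relative to neighboring bins:

```python
import numpy as np

def ssvep_snr(signal, fs, f_tag, n_neighbors=10):
    # Amplitude spectrum of one epoch, and the SNR at the tagging
    # frequency: the amplitude in the bin nearest f_tag divided by the
    # mean amplitude of the surrounding bins (excluding the bin itself).
    n = len(signal)
    amp = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmin(np.abs(freqs - f_tag))
    lo, hi = max(k - n_neighbors, 0), min(k + n_neighbors + 1, len(amp))
    neighbors = np.r_[amp[lo:k], amp[k + 1:hi]]
    return amp[k] / neighbors.mean()
```

With frequency-tagging, the same computation applied at each feature's stimulation frequency quantifies that feature's response independently of the others.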
Max ERC Funding
1 490 360 €
Duration
Start date: 2012-02-01, End date: 2017-01-31
Project acronym ModularExperience
Project How the modularization of the mind unfolds in the brain
Researcher (PI) Hans Pieter P Op De Beeck
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary The mind is not a unitary entity, nor is its physical substrate, the brain. Both can be divided into multiple components, some of which have been referred to as modules. Many controversies exist in cognitive science, psychology, and philosophy about the properties and the status of these modules. A compromise view is offered by a hypothesis of modularization which has two central tenets: (i) Genetic influences determine a weak non-modular map-like organization of the mind and (ii) this map develops into a set of module-like compartments. Here we will test this hypothesis in the domain of visual object knowledge. Testable predictions are derived from a novel extension and integration of previous proposals (i) for the presence of non-modular maps (Op de Beeck et al., 2008, Nature Rev. Neurosci.), which are logical candidates for the starting point proposed in the modularization hypothesis, and (ii) for how maps might be transformed by further experience (Op de Beeck & Baker, 2010, Trends in Cognit. Sci.) into a strong compartmentalization for specific types of visual stimuli. We will determine whether the same rules govern modularization for face perception and reading, despite the very different evolutionary history of faces and word stimuli. We will apply well-known analysis tools from the psychology literature, such as multidimensional scaling, to the patterns of activity obtained by brain imaging, so that we can directly compare the structure and modularity of visual processing in mental space with the structure of “brain space” (functional anatomy). The combined behavioral and imaging experiments will characterize the properties of non-modular maps and module-like regions in sighted and congenitally blind adults and in children, and test specific hypotheses about how experience affects non-modular maps and the degree of modularization.
The findings will reveal how the structure of the adult mind is the dynamic end point of a process of modularization in the brain.
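The multidimensional scaling mentioned above maps a matrix of pairwise dissimilarities between activity patterns into a low-dimensional "mental space". As a generic illustration (not the project's analysis pipeline), classical (Torgerson) MDS can be sketched as follows:

```python
import numpy as np

def classical_mds(D, n_dims=2):
    # Classical (Torgerson) multidimensional scaling: embed n items so
    # that their Euclidean distances approximate the dissimilarities in
    # the symmetric n x n matrix D.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_dims]  # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

When D consists of exact Euclidean distances between points in n_dims dimensions, the embedding recovers the configuration up to rotation and translation; with noisy neural dissimilarities it gives the best low-rank approximation in the same sense.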
Max ERC Funding
1 474 800 €
Duration
Start date: 2012-06-01, End date: 2017-05-31