Project acronym BAYES OR BUST!
Project Bayes or Bust: Sensible Hypothesis Tests for Social Scientists
Researcher (PI) Eric-Jan Wagenmakers
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Country Netherlands
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary The goal of this proposal is to develop and promote Bayesian hypothesis tests for social scientists. By and large, social scientists have ignored the Bayesian revolution in statistics, and, consequently, most social scientists still assess the veracity of experimental effects using the same methodology that was used by their advisors and the advisors before them. This state of affairs is undesirable: social scientists conduct groundbreaking, innovative research only to analyze their results using methods that are old-fashioned or even inappropriate. This imbalance between the science and the statistics has gradually increased the pressure on the field to change the way inferences are drawn from their data. However, three requirements need to be fulfilled before social scientists are ready to adopt Bayesian tests of hypotheses. First, the Bayesian tests need to be developed for problems that social scientists work with on a regular basis; second, the Bayesian tests need to be default or objective; and, third, the Bayesian tests need to be available in a user-friendly computer program. This proposal seeks to make major progress on all three fronts.
Concretely, the projects in this proposal build on recent developments in the field of statistics and use the default Jeffreys-Zellner-Siow priors to compute Bayesian hypothesis tests for regression, correlation, the t-test, and different versions of analysis of variance (ANOVA). A similar approach will be used to develop Bayesian hypothesis tests for logistic regression and the analysis of contingency tables, as well as for popular latent process methods such as factor analysis and structural equation modeling. We aim to implement the various tests in a new computer program, Bayes-SPSS, with a similar look and feel as the frequentist spreadsheet program SPSS (i.e., Statistical Package for the Social Sciences). Together, these projects may help revolutionize the way social scientists analyze their data.
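The default tests the summary refers to are Bayes factors computed under Jeffreys-Zellner-Siow priors. As an illustration of the kind of computation involved (a sketch of the published one-sample JZS t-test Bayes factor of Rouder et al., 2009, not the proposal's own software; the function name is ours), the Bayes factor reduces to a one-dimensional numerical integral:

```python
import math
from scipy import integrate

def jzs_bf10(t, n):
    """One-sample JZS Bayes factor BF10 (Rouder et al., 2009, Eq. 1).

    t : observed t statistic; n : sample size. BF10 > 1 favours H1.
    """
    nu = n - 1
    # Marginal likelihood under H0 (effect size delta = 0),
    # up to a constant shared with H1.
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Under H1, delta | g ~ N(0, g) with g ~ inverse-chi-square(1),
    # which makes the marginal prior on delta a standard Cauchy.
    def integrand(g):
        if g <= 0:
            return 0.0
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * (2 * math.pi) ** -0.5 * g ** -1.5 * math.exp(-1 / (2 * g)))

    m1, _ = integrate.quad(integrand, 0, math.inf)
    return m1 / m0
```

For t = 0 the Bayes factor favours the null (BF10 < 1), and it grows monotonically as |t| increases, which is the qualitative behaviour a default test should have.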
Max ERC Funding
1 498 286 €
Duration
Start date: 2012-05-01, End date: 2017-04-30
Project acronym CASAA
Project Catalytic asymmetric synthesis of amines and amides
Researcher (PI) Jeffrey William Bode
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Country Switzerland
Call Details Starting Grant (StG), PE5, ERC-2012-StG_20111012
Summary "Amines and their acylated derivatives – amides – are among the most common chemical functional groups found in modern pharmaceuticals. Despite this, there are few methods for their efficient, environmentally sustainable production in enantiomerically pure form. This proposal seeks to provide new catalytic chemical methods including 1) the catalytic, enantioselective synthesis of peptides and 2) catalytic methods for the preparation of enantiopure nitrogen-containing heterocycles. The proposed work features innovative chemistry including novel reaction mechanisms and catalysts. These methods have far-reaching applications for the sustainable production of valuable compounds as well as for fundamental science."
Max ERC Funding
1 500 000 €
Duration
Start date: 2012-12-01, End date: 2017-11-30
Project acronym CASINO
Project Carbohydrate signals controlling nodulation
Researcher (PI) Jens Stougaard Jensen
Host Institution (HI) AARHUS UNIVERSITET
Country Denmark
Call Details Advanced Grant (AdG), LS3, ERC-2010-AdG_20100317
Summary Mechanisms governing interaction between multicellular organisms and microbes are central for understanding pathogenesis, symbiosis and the function of ecosystems. We propose to address these mechanisms by pioneering an interdisciplinary approach for understanding cellular signalling, response processes and organ development. The challenge is to determine the factors synchronising three processes, organogenesis, infection thread formation and bacterial infection, that run in parallel to build a root nodule hosting symbiotic bacteria. We aim to exploit the unique possibilities for analysing endocytosis of bacteria in model legumes and to develop genomic, genetic and biological chemistry tools to break new ground in our understanding of carbohydrates in plant development and plant-microbe interaction. Surface-exposed rhizobial polysaccharides play a crucial but poorly understood role in infection thread formation and rhizobial invasion resulting in endocytosis. We will undertake an integrated functional characterisation of receptor-ligand mechanisms mediating recognition of secreted polysaccharides and subsequent signal amplification. So far, progress in this field has been limited by the complex nature of carbohydrate polymers, the lack of a suitable experimental model system in which both partners of an interaction can be manipulated, and the lack of corresponding methods for carbohydrate synthesis, analysis and interaction studies. In this context, our legume model system and the discovery that the legume Nod-factor receptors recognise bacterial lipochitin-oligosaccharide signals at their LysM domains provide a new opportunity. Combined with advanced bioorganic chemistry and nanobioscience approaches, this proposal will address the above-mentioned limitations.
Max ERC Funding
2 399 127 €
Duration
Start date: 2011-05-01, End date: 2016-04-30
Project acronym CYTRIX
Project Engineering Cytokines for Super-Affinity Binding to Matrix in Regenerative Medicine
Researcher (PI) Jeffrey Alan Hubbell
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Country Switzerland
Call Details Advanced Grant (AdG), LS7, ERC-2013-ADG
Summary In physiological situations, the extracellular matrix (ECM) sequesters cytokines, localizes them, and modulates their signaling. Thus, physiological signaling from cytokines occurs primarily when the cytokines are interacting with the ECM. In therapeutic use of cytokines, however, this interaction and balance have not been respected; rather, the growth factors are merely injected or applied as soluble molecules, perhaps in controlled release forms. This has led to modest efficacy and substantial concerns about safety. Here, we will develop a protein engineering design for second-generation cytokines to lead to their super-affinity binding to ECM molecules in the targeted tissues; this would allow application to a tissue site to yield a tight association with ECM molecules there, turning the tissue itself into a reservoir for cytokine sequestration and presentation. To accomplish this, we have undertaken preliminary work screening a library of cytokines for extraordinarily high affinity binding to a library of ECM molecules. We have thereby identified a small peptide domain within placental growth factor-2 (PlGF-2), namely PlGF-2 (residues 123-144), that displays super-affinity for a number of ECM proteins. Also in preliminary work, we have demonstrated that recombinant fusion of this domain to low-affinity binding cytokines, namely VEGF-A, PDGF-BB and BMP-2, confers super-affinity binding to ECM molecules and accentuates their functionality in vivo in regenerative medicine models. In the proposed project, based on these preliminary data, we will push forward this protein engineering design, pursuing super-affinity variants of VEGF-A and PDGF-BB in chronic wounds, TGF-beta3 and CXCL11 in skin scar reduction, FGF-18 in osteoarthritic cartilage repair and CXCL12 in stem cell recruitment to ischemic cardiac muscle. Thus, we seek to demonstrate a fundamentally new concept and platform for second-generation growth factor protein engineering.
Max ERC Funding
2 368 170 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym DETECT
Project Describing Evolution with Theoretical, Empirical, and Computational Tools
Researcher (PI) Jeffrey Jensen
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Country Switzerland
Call Details Starting Grant (StG), LS8, ERC-2012-StG_20111109
Summary As evolutionary biologists we are of course motivated by the desire to gain further insight into the evolution of natural populations. The main goals of this proposal are to (i) develop theory and methodology that will enable the identification of adaptively evolving genomic regions using polymorphism data, (ii) develop theory and methodology for the estimation of whole-genome rates of adaptive evolution, and (iii) apply the developed theory in two strategic collaborative applications. Capitalizing on recently available and soon-to-be available whole-genome polymorphism data across multiple taxa, these approaches are expected to significantly improve the identification and localization of recent selective events, as well as provide long-sought information regarding the genomic distributions of selective effects. Additionally, through these ongoing collaborations with empirical and experimental labs, this methodology will allow for specific hypothesis testing that will further illuminate classical examples of adaptation. In sum, this proposal seeks to Describe Evolution with Theoretical, Empirical and Computational Tools (DETECT), aiming to accurately characterize the very mode and tempo of Darwinian adaptation.
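Identifying recent selection from polymorphism data typically begins with summary statistics of the site-frequency spectrum. As a deliberately simple illustration (a classical statistic, not the new methodology the proposal will develop), Tajima's D contrasts mean pairwise diversity with Watterson's estimator of the population mutation rate; the sketch below takes aligned haplotypes as strings:

```python
from itertools import combinations

def tajimas_d(seqs):
    """Tajima's (1989) D from a list of aligned haplotype strings."""
    n = len(seqs)
    sites = len(seqs[0])
    # Number of segregating (polymorphic) sites.
    S = sum(1 for j in range(sites) if len({s[j] for s in seqs}) > 1)
    if S == 0:
        return 0.0
    # Mean number of pairwise differences (pi).
    pi = sum(sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)) / (n * (n - 1) / 2)
    # Normalizing constants from Tajima (1989).
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    # D contrasts pi with Watterson's estimator S / a1.
    return (pi - S / a1) / (e1 * S + e2 * S * (S - 1)) ** 0.5
```

Strongly negative values (an excess of rare variants) are the classic signature of a recent selective sweep; the proposal's likelihood-based methods aim to go well beyond such single-number summaries.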
Max ERC Funding
1 071 729 €
Duration
Start date: 2013-01-01, End date: 2017-08-31
Project acronym ESSOG
Project Extracting science from surveys of our Galaxy
Researcher (PI) James Jeffrey Binney
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Advanced Grant (AdG), PE9, ERC-2012-ADG_20120216
Summary "The goal is to put in place the infrastructure required to extract the promised science from the large surveys of our Galaxy that are underway and will culminate in ESA's Cornerstone Mission Gaia. Dynamical models are fundamental to this process because surveys are heavily biased by the Sun's location in the Galaxy. Novel dynamical models will be built and novel methods of fitting them to the data developed. With their help we will be able to constrain the distribution of dark matter in the Galaxy. By modelling the chemical and dynamical evolution of the Galaxy we expect to be able to infer much information about how the Galaxy was assembled, and thus test the prevailing cosmological paradigm. During the grant period we will be applying our tools to ground-based surveys, but the first version of the Gaia Catalogue will become available at the end of the grant period, and our goal is to have everything ready and tested for its prompt exploitation."
Max ERC Funding
1 954 460 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym EURECA
Project Eukaryotic Regulated RNA Catabolism
Researcher (PI) Torben Heick Jensen
Host Institution (HI) AARHUS UNIVERSITET
Country Denmark
Call Details Advanced Grant (AdG), LS1, ERC-2013-ADG
Summary "Regulation and fidelity of gene expression is fundamental to the differentiation and maintenance of all living organisms. While historically attention has been focused on the process of transcriptional activation, we predict that RNA turnover pathways are equally important for gene expression regulation. This has been implied for selected protein-coding RNAs (mRNAs) but is virtually unexplored for non-protein-coding RNAs (ncRNAs).
The intention of the EURECA proposal is to establish cutting-edge research to characterize mammalian nuclear RNA turnover: its factor utility, substrate specificity and regulatory capacity. We foresee that RNA turnover is at the core of gene expression regulation – forming intricate connections to RNA-productive systems – and is thus centrally placed to determine RNA fate. EURECA seeks to dramatically improve our understanding of cellular decision processes impacting RNA levels and to establish models for how regulated RNA turnover helps control key biological processes.
The realization that the number of ncRNA-producing genes was previously grossly underestimated foretells that ncRNA regulation will impact on most aspects of cell biology. Consistently, aberrant ncRNA levels correlate with human disease phenotypes, and RNA turnover complexes are linked to disease biology. Still, solid models for how ncRNA turnover regulates biological processes in higher eukaryotes are not available. Moreover, which ncRNAs retain function and which are merely transcriptional by-products remains a major challenge to sort out. The circumstances and kinetics of ncRNA turnover are therefore important to delineate, as these will ultimately relate to the likelihood of molecular function. A fundamental challenge here is also to discern which protein complements of non-coding ribonucleoprotein particles (ncRNPs) are (in)compatible with function. Balancing single transcript/factor analysis with high-throughput methodology, EURECA will address these questions."
Max ERC Funding
2 497 960 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym GENEWELL
Project Genetics and epigenetics of animal welfare
Researcher (PI) Per Ole Stokmann Jensen
Host Institution (HI) LINKOPINGS UNIVERSITET
Country Sweden
Call Details Advanced Grant (AdG), LS9, ERC-2012-ADG_20120314
Summary Animal welfare is a topic of the highest societal and scientific priority. Here, I propose to use genomic and epigenetic tools to provide a new perspective on the biology of animal welfare. This will reveal mechanisms involved in modulating stress responses. Groundbreaking aspects include new insights into how environmental conditions shape the orchestration of the genome by means of epigenetic mechanisms, and how this in turn modulates coping patterns of animals. The flexible epigenome comprises the interface between the environment and the genome. It is involved in both short- and long-term, including transgenerational, adaptations of animals. Hence, populations may adapt to environmental conditions over generations, using epigenetic mechanisms. The project will primarily be based on chickens, but will also be extended to a novel species, the dog. We will generate congenic chicken strains, in which interesting alleles and epialleles will be fixed against a common background of either RJF or domestic genotypes. In these, we will apply a broad phenotyping strategy to characterize the effects on different welfare-relevant behaviors. Furthermore, we will characterize how environmental stress affects the epigenome of birds, and tissue samples from more than 500 birds from an intercross between RJF and White Leghorn layers will be used to perform an extensive meth-QTL analysis. This will reveal environmental and genetic mechanisms affecting gene-specific methylation. The dog is another highly interesting species in the context of behavior genetics, because of its high inter-breed variation in behavior and its compact, sequenced genome. We will set up a large-scale F2-intercross experiment and phenotype about 400 dogs in standardized behavioral tests. All individuals will be genotyped at about 1000 genetic markers, and this will be used to perform an extensive QTL analysis in order to find new loci and alleles associated with personalities and coping patterns.
Max ERC Funding
2 499 828 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym HETMAT
Project Heterogeneity That Matters for Trade and Welfare
Researcher (PI) Thierry Mayer
Host Institution (HI) FONDATION NATIONALE DES SCIENCES POLITIQUES
Country France
Call Details Starting Grant (StG), SH1, ERC-2012-StG_20111124
Summary Accounting for firms' heterogeneity in trade patterns is probably one of the key innovations of international trade that occurred during the last decade. The impact of initial papers such as Melitz (2003) and Bernard and Jensen (1999) is so large in the field that it is considered to have introduced a new paradigm. Apart from providing a convincing framework for a set of empirical facts, the main motivation of this literature was that there are new gains to be expected from trade liberalization. Those come from a selection process, raising aggregate productivity through the reallocation of output among heterogeneous firms. It initially seemed that the information requirements for trade policy evaluations had become much more demanding, in particular requiring detailed micro data. However, the recent work of Arkolakis et al. (2011) suggests that two aggregate "sufficient statistics" may be all that is needed to compute the welfare changes associated with trade liberalization. Moreover, they show that those statistics are the same when evaluating welfare changes in representative-firm models. The project has three parts. The first one starts by showing that the sufficient statistics approach relies crucially on a specific distributional assumption on heterogeneity, the Pareto distribution. When heterogeneity is not Pareto-distributed, it does matter: aggregate statistics are not sufficient to evaluate welfare changes and predict trade patterns. The second part of the project specifies which type of firm-level heterogeneity matters. It shows how to identify which sectors are characterized by "productivity sorting" and in which ones "quality sorting" is more relevant. Extending the analysis to multi-product firms, the third part shows that heterogeneity inside the firm also matters for welfare changes following trade shocks. It considers how the change in the product mix of the firm following trade liberalization alters the measured productivity of the firm.
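The two-statistic result of Arkolakis et al. mentioned in the summary can be stated compactly. Under the assumptions those authors spell out (and with the sign convention that the trade elasticity magnitude is θ > 0), the proportional welfare change from a trade shock depends only on the change in λ, the share of expenditure on domestic goods:

```latex
\hat{W} = \hat{\lambda}^{-1/\theta},
\qquad
\text{gains from trade} = 1 - \lambda^{1/\theta},
```

where hats denote proportional changes and the second expression compares the observed equilibrium with autarky (where λ = 1). The project's first part argues that once firm heterogeneity departs from the Pareto distribution, λ and θ alone no longer pin down these welfare changes.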
Max ERC Funding
1 119 040 €
Duration
Start date: 2012-11-01, End date: 2018-07-31
Project acronym M and M
Project Generalization in Mind and Machine
Researcher (PI) Jeffrey BOWERS
Host Institution (HI) UNIVERSITY OF BRISTOL
Country United Kingdom
Call Details Advanced Grant (AdG), SH4, ERC-2016-ADG
Summary Is the human mind a symbolic computational device? This issue was at the core of Chomsky’s critique of Skinner in the 1960s, and motivated the debates regarding Parallel Distributed Processing models developed in the 1980s. The recent successes of “deep” networks make this issue topical for psychology and neuroscience, and raise the question of whether symbols are needed for artificial intelligence more generally.
One of the innovations of the current project is to identify simple empirical phenomena that will serve as a critical test-bed for both symbolic and non-symbolic neural networks. In order to make substantial progress on this issue, a series of empirical and computational investigations is organised as follows. First, studies focus on tasks that, according to proponents of symbolic systems, require symbols for the sake of generalisation. Accordingly, if non-symbolic networks succeed, it would undermine one of the main motivations for symbolic systems. Second, studies focus on generalisation in tasks in which human performance is well characterised. Accordingly, the research will provide important constraints for theories of cognition across a range of domains, including vision, memory, and reasoning. Third, studies develop new learning algorithms designed to make symbolic systems biologically plausible. One of the reasons why symbolic networks are often dismissed is the claim that they are not as biologically plausible as non-symbolic models. This last ambition is the most high-risk but also potentially the most important: introducing new computational principles may fundamentally advance our understanding of how the brain learns and computes, and furthermore, these principles may increase the computational powers of networks in ways that are important for engineering and artificial intelligence.
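The kind of generalisation test at stake can be illustrated with a toy case (an illustrative sketch, not taken from the project): a one-layer linear network with localist (one-hot) inputs trained on the identity mapping for some units learns nothing about a held-out unit, because that unit's weights never receive an error signal. This is a classic form of the argument that some generalisations seem to call for symbolic, variable-binding machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.1, size=(n, n))  # weights: one-hot inputs -> outputs

# Train on the identity mapping for units 0..3 only; unit 4 is held out.
X = np.eye(n)[:4]
for _ in range(2000):
    grad = X.T @ (X @ W - X)  # gradient of squared error w.r.t. W
    W -= 0.1 * grad

trained = np.eye(n)[:4] @ W   # converges to identity on trained units
held_out = np.eye(n)[4] @ W   # unchanged random row: no generalisation
```

Because each one-hot input activates a disjoint set of weights, gradient descent leaves the held-out unit's row at its random initial values, so `held_out` is nowhere near the identity output it "should" produce.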
Max ERC Funding
2 495 578 €
Duration
Start date: 2017-09-01, End date: 2022-08-31