Project acronym BAYES OR BUST!
Project Bayes or Bust: Sensible Hypothesis Tests for Social Scientists
Researcher (PI) Eric-Jan Wagenmakers
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Country Netherlands
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary The goal of this proposal is to develop and promote Bayesian hypothesis tests for social scientists. By and large, social scientists have ignored the Bayesian revolution in statistics; consequently, most social scientists still assess the veracity of experimental effects using the same methodology that was used by their advisors, and by the advisors before them. This state of affairs is undesirable: social scientists conduct groundbreaking, innovative research only to analyze their results using methods that are old-fashioned or even inappropriate. This imbalance between the science and the statistics has gradually increased the pressure on the field to change the way it draws inferences from data. However, three requirements need to be fulfilled before social scientists are ready to adopt Bayesian hypothesis tests. First, the Bayesian tests need to be developed for problems that social scientists work with on a regular basis; second, the Bayesian tests need to be default or objective; and, third, the Bayesian tests need to be available in a user-friendly computer program. This proposal seeks to make major progress on all three fronts.
Concretely, the projects in this proposal build on recent developments in the field of statistics and use the default Jeffreys-Zellner-Siow priors to compute Bayesian hypothesis tests for regression, correlation, the t-test, and different versions of analysis of variance (ANOVA). A similar approach will be used to develop Bayesian hypothesis tests for logistic regression and the analysis of contingency tables, as well as for popular latent process methods such as factor analysis and structural equation modeling. We aim to implement the various tests in a new computer program, Bayes-SPSS, with a look and feel similar to that of the frequentist statistical package SPSS (i.e., Statistical Package for the Social Sciences). Together, these projects may help revolutionize the way social scientists analyze their data.
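To give a concrete sense of what such a default test looks like, the sketch below computes a Jeffreys-Zellner-Siow Bayes factor for a one-sample t-test by numerical integration, following the formulation of Rouder et al. (2009). The function name, the Cauchy prior scale, and the example numbers are illustrative assumptions, not part of the proposal or of the planned Bayes-SPSS program.

import numpy as np
from scipy.integrate import quad

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """JZS Bayes factor BF10 for a one-sample t-test (Rouder et al., 2009).

    t: observed t statistic; n: sample size; r: Cauchy prior scale on effect size.
    """
    nu = n - 1  # degrees of freedom

    # Marginal likelihood under the null hypothesis (up to a shared constant).
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Marginal likelihood under the alternative: integrate out g, the mixing
    # variable that turns a normal prior on effect size into a Cauchy prior.
    def integrand(g):
        scale = 1 + n * g * r**2
        return (scale ** -0.5
                * (1 + t**2 / (scale * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    m1, _ = quad(integrand, 0, np.inf)
    return m1 / m0

# Example: t = 2.5 observed with n = 30; BF10 > 1 favours the alternative.
print(round(jzs_bf10(2.5, 30), 2))

One appeal of such default Bayes factors is that, unlike p-values, they can quantify evidence for the null hypothesis as well as against it.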
Max ERC Funding
1 498 286 €
Duration
Start date: 2012-05-01, End date: 2017-04-30
Project acronym HETMAT
Project Heterogeneity That Matters for Trade and Welfare
Researcher (PI) Thierry Mayer
Host Institution (HI) FONDATION NATIONALE DES SCIENCES POLITIQUES
Country France
Call Details Starting Grant (StG), SH1, ERC-2012-StG_20111124
Summary Accounting for firm heterogeneity in trade patterns is probably one of the key innovations in international trade research of the last decade. The impact of early papers such as Melitz (2003) and Bernard and Jensen (1999) is so large that they are considered to have introduced a new paradigm. Apart from providing a convincing framework for a set of empirical facts, the main motivation of this literature was that new gains are to be expected from trade liberalization. Those gains come from a selection process that raises aggregate productivity through the reallocation of output among heterogeneous firms. It initially seemed that the information requirements for trade policy evaluation had become much more demanding, in particular requiring detailed micro data. However, the recent work of Arkolakis et al. (2011) suggests that two aggregate “sufficient statistics” may be all that is needed to compute the welfare changes associated with trade liberalization. Moreover, they show that those statistics are the same when evaluating welfare changes in representative-firm models. The project has three parts. The first shows that the sufficient-statistics approach relies crucially on a specific distributional assumption about heterogeneity, the Pareto distribution. When heterogeneity is not Pareto-distributed, it does matter: aggregate statistics are not sufficient to evaluate welfare changes or predict trade patterns. The second part of the project specifies which type of firm-level heterogeneity matters: it shows how to identify the sectors characterized by “productivity sorting” and those in which “quality sorting” is more relevant. Extending the analysis to multi-product firms, the third part shows that heterogeneity inside the firm also matters for welfare changes following trade shocks. It considers how changes in a firm's product mix following trade liberalization alter the firm's measured productivity.
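To make the sufficient-statistics benchmark concrete: in the Arkolakis et al. framework, the welfare gains from trade relative to autarky depend only on the domestic expenditure share and the trade elasticity. The snippet below is a minimal sketch of that formula, with the trade elasticity entered as a positive magnitude; the function name and the numbers are hypothetical.

def acr_gains_from_trade(domestic_share, trade_elasticity):
    """Welfare gains from trade relative to autarky under the Arkolakis et al.
    sufficient-statistics formula: 1 - lambda**(1/epsilon), where lambda is the
    domestic expenditure share and epsilon the (positive) trade elasticity."""
    return 1.0 - domestic_share ** (1.0 / trade_elasticity)

# Hypothetical example: 90% of spending falls on domestic goods, elasticity of 5.
print(f"{acr_gains_from_trade(0.90, 5.0):.1%}")  # roughly 2%

The first part of the project argues that this shortcut hinges on the Pareto assumption: with non-Pareto heterogeneity, these two aggregates no longer pin down the welfare change.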
Max ERC Funding
1 119 040 €
Duration
Start date: 2012-11-01, End date: 2018-07-31
Project acronym M and M
Project Generalization in Mind and Machine
Researcher (PI) Jeffrey Bowers
Host Institution (HI) UNIVERSITY OF BRISTOL
Country United Kingdom
Call Details Advanced Grant (AdG), SH4, ERC-2016-ADG
Summary Is the human mind a symbolic computational device? This issue was at the core of Chomsky’s critique of Skinner in the 1960s, and it motivated the debates regarding the Parallel Distributed Processing models developed in the 1980s. The recent successes of “deep” networks make this issue topical for psychology and neuroscience, and raise the question of whether symbols are needed for artificial intelligence more generally.
One of the innovations of the current project is to identify simple empirical phenomena that will serve as a critical test-bed for both symbolic and non-symbolic neural networks. To make substantial progress on this issue, a series of empirical and computational investigations is organised as follows. First, studies focus on tasks that, according to proponents of symbolic systems, require symbols for the sake of generalisation (as illustrated in the sketch below). Accordingly, if non-symbolic networks succeed, it would undermine one of the main motivations for symbolic systems. Second, studies focus on generalisation in tasks in which human performance is well characterised. Accordingly, the research will provide important constraints for theories of cognition across a range of domains, including vision, memory, and reasoning. Third, studies develop new learning algorithms designed to make symbolic systems biologically plausible. One of the reasons why symbolic networks are often dismissed is the claim that they are not as biologically plausible as non-symbolic models. This last ambition is the most high-risk but also potentially the most important: introducing new computational principles may fundamentally advance our understanding of how the brain learns and computes, and, furthermore, these principles may increase the computational power of networks in ways that are important for engineering and artificial intelligence.
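To illustrate the kind of generalisation test-bed the first set of studies envisions, the sketch below trains a small non-symbolic network on an identity (copy) task in which one input position never occurs during training, and then tests on that withheld position. A symbolic rule (output equals input) generalises trivially, whereas a standard network typically does not. The task, library, and parameters are illustrative assumptions rather than the project's actual materials.

import numpy as np
from sklearn.neural_network import MLPRegressor

dim = 10

# Training set: one-hot inputs for positions 0..8, target = the input itself.
# Position 9 is never active during training.
X_train = np.repeat(np.eye(dim)[:dim - 1], 50, axis=0)
y_train = X_train.copy()

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# A symbolic identity rule would return the withheld one-hot vector unchanged;
# the trained network typically produces values near 0 at the withheld position.
x_test = np.eye(dim)[dim - 1:dim]
print(np.round(net.predict(x_test), 2))

Whether, and under what training regimes, non-symbolic networks can pass tests of this kind is precisely the empirical question the first line of studies addresses.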
Max ERC Funding
2 495 578 €
Duration
Start date: 2017-09-01, End date: 2022-08-31