Project acronym BAYES OR BUST!
Project Bayes or Bust: Sensible Hypothesis Tests for Social Scientists
Researcher (PI) Eric-Jan Wagenmakers
Host Institution (HI) UNIVERSITEIT VAN AMSTERDAM
Country Netherlands
Call Details Starting Grant (StG), SH4, ERC-2011-StG_20101124
Summary The goal of this proposal is to develop and promote Bayesian hypothesis tests for social scientists. By and large, social scientists have ignored the Bayesian revolution in statistics, and, consequently, most still assess the veracity of experimental effects using the same methodology that was used by their advisors and the advisors before them. This state of affairs is undesirable: social scientists conduct groundbreaking, innovative research only to analyze their results with methods that are old-fashioned or even inappropriate. This imbalance between the science and the statistics has gradually increased the pressure on the field to change the way it draws inferences from data. However, three requirements need to be fulfilled before social scientists are ready to adopt Bayesian hypothesis tests. First, the Bayesian tests need to be developed for problems that social scientists work with on a regular basis; second, the tests need to be default or objective; and, third, the tests need to be available in a user-friendly computer program. This proposal seeks to make major progress on all three fronts.
Concretely, the projects in this proposal build on recent developments in statistics and use the default Jeffreys-Zellner-Siow priors to compute Bayesian hypothesis tests for regression, correlation, the t-test, and different versions of analysis of variance (ANOVA). A similar approach will be used to develop Bayesian hypothesis tests for logistic regression and the analysis of contingency tables, as well as for popular latent-process methods such as factor analysis and structural equation modeling. We aim to implement the various tests in a new computer program, Bayes-SPSS, with a look and feel similar to that of the frequentist statistical package SPSS (Statistical Package for the Social Sciences). Together, these projects may help revolutionize the way social scientists analyze their data.
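To make the default-prior idea concrete, here is a minimal sketch of the kind of test the proposal would implement: the one-sample Jeffreys-Zellner-Siow Bayes factor for the t-test, as developed by Rouder et al. (2009). The function name, the Cauchy prior scale r, and the example numbers are illustrative assumptions, not part of the proposal.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """One-sample JZS Bayes factor BF10 from a t statistic and sample size n.

    Under H1 the standardized effect is Cauchy(0, r), i.e. normal with
    variance g where g ~ InverseGamma(1/2, r^2/2); H0 fixes the effect at 0.
    """
    nu = n - 1
    # Marginal likelihood under H0 (up to a factor shared with H1).
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        prior = (r / np.sqrt(2 * np.pi)) * g ** (-1.5) * np.exp(-r**2 / (2 * g))
        like = (1 + n * g) ** (-0.5) * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        return like * prior

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0

# Illustrative call: t = 2.5 observed with n = 30; BF10 > 1 favours H1 over H0.
print(round(jzs_bf10(t=2.5, n=30), 2))
```

The appeal for applied users is that the prior scale is fixed by convention, so the test runs with no user-tuned parameters; this is what "default or objective" means in the summary above.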
Max ERC Funding
1 498 286 €
Duration
Start date: 2012-05-01, End date: 2017-04-30
Project acronym ESSOG
Project Extracting science from surveys of our Galaxy
Researcher (PI) James Jeffrey Binney
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Country United Kingdom
Call Details Advanced Grant (AdG), PE9, ERC-2012-ADG_20120216
Summary "The goal is to put in place the infrastructure required to extract the promised science for large surveys of our Galaxy that are underway and will culminate in ESA's Cornerstone Mission Gaia. Dynamical models are fundamental to this process because surveys are heavily biased by the Sun's location in the Galaxy. Novel dynamical models will be built and novel methods of fitting them to the data developed. With their help we will be able to constrain the distribution of dark matter in the Galaxy. By modelling the chemical and dynamical evolution of the Galaxy we expect to be able to infer much information about how the Galaxy was assembled, and thus test the prevailing cosmological paradigm. During the grant period we will be applying our tools to ground-based surveys, but the first version of the Gaia Catalogue will become available at the end of the grant period, and our goal is to have everything ready and tested for its prompt exploitation."
Max ERC Funding
1 954 460 €
Duration
Start date: 2013-04-01, End date: 2018-03-31
Project acronym EURECA
Project Eukaryotic Regulated RNA Catabolism
Researcher (PI) Torben Heick Jensen
Host Institution (HI) AARHUS UNIVERSITET
Country Denmark
Call Details Advanced Grant (AdG), LS1, ERC-2013-ADG
Summary "Regulation and fidelity of gene expression is fundamental to the differentiation and maintenance of all living organisms. While historically attention has been focused on the process of transcriptional activation, we predict that RNA turnover pathways are equally important for gene expression regulation. This has been implied for selected protein-coding RNAs (mRNAs) but is virtually unexplored for non-protein-coding RNAs (ncRNAs).
The intention of the EURECA proposal is to establish cutting-edge research to characterize mammalian nuclear RNA turnover; its factor utility, substrate specificity and regulatory capacity. We foresee that RNA turnover is at the core of gene expression regulation - forming intricate connection to RNA productive systems – thus, being centrally placed to determine RNA fate. EURECA seeks to dramatically improve our understanding of cellular decision processes impacting RNA levels and to establish models for how regulated RNA turnover helps control key biological processes.
The realization that the number of ncRNA producing genes was previously grossly underestimated foretells that ncRNA regulation will impact on most aspects of cell biology. Consistently, aberrant ncRNA levels correlate with human disease phenotypes and RNA turnover complexes are linked to disease biology. Still, solid models for how ncRNA turnover regulate biological processes in higher eukaryotes are not available. Moreover, which ncRNAs retain function and which are merely transcriptional by-products remain a major challenge to sort out. The circumstances and kinetics of ncRNA turnover are therefore important to delineate as these will ultimately relate to the likelihood of molecular function. A fundamental challenge here is to also discern which protein complements of non-coding ribonucleoprotein particles (ncRNPs) are (in)compatible with function. Balancing single transcript/factor analysis with high-throughput methodology, EURECA will address these questions."
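Since the summary stresses the circumstances and kinetics of turnover, the sketch below shows the standard way an RNA half-life is estimated (the data points are hypothetical): fit first-order decay N(t) = N0 * exp(-kt) to abundances measured after transcription shut-off and report t1/2 = ln 2 / k.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, k):
    """First-order decay: n(t) = n0 * exp(-k * t)."""
    return n0 * np.exp(-k * t)

# Hypothetical relative abundances after transcription shut-off (minutes).
t = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 120.0])
n = np.array([1.00, 0.71, 0.49, 0.26, 0.07, 0.02])

(n0, k), _ = curve_fit(decay, t, n, p0=(1.0, 0.03))
print(f"k = {k:.3f} per min; half-life = {np.log(2) / k:.1f} min")
```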
Max ERC Funding
2 497 960 €
Duration
Start date: 2014-04-01, End date: 2019-03-31
Project acronym GENEWELL
Project Genetics and epigenetics of animal welfare
Researcher (PI) Per Ole Stokmann Jensen
Host Institution (HI) LINKOPINGS UNIVERSITET
Country Sweden
Call Details Advanced Grant (AdG), LS9, ERC-2012-ADG_20120314
Summary Animal welfare is a topic of the highest societal and scientific priority. Here, I propose to use genomic and epigenetic tools to provide a new perspective on the biology of animal welfare. This will reveal mechanisms involved in modulating stress responses. Groundbreaking aspects include new insights into how environmental conditions shape the orchestration of the genome by means of epigenetic mechanisms, and how this in turn modulates the coping patterns of animals. The flexible epigenome comprises the interface between the environment and the genome. It is involved in both short- and long-term, including transgenerational, adaptations of animals. Hence, populations may adapt to environmental conditions over generations, using epigenetic mechanisms. The project will primarily be based on chickens, but will also be extended to a novel species, the dog. We will generate congenic chicken strains in which interesting alleles and epialleles are fixed against a common background of either red junglefowl (RJF) or domestic genotypes. In these, we will apply a broad phenotyping strategy to characterize the effects on different welfare-relevant behaviors. Furthermore, we will characterize how environmental stress affects the epigenome of birds, and tissue samples from more than 500 birds from an intercross between RJF and White Leghorn layers will be used to perform an extensive methylation QTL (meth-QTL) analysis. This will reveal environmental and genetic mechanisms affecting gene-specific methylation. The dog is another highly interesting species in the context of behavior genetics, because of its high inter-breed variation in behavior and its compact, sequenced genome. We will set up a large-scale F2-intercross experiment and phenotype about 400 dogs in standardized behavioral tests. All individuals will be genotyped at about 1,000 genetic markers, and this will be used to perform an extensive QTL analysis in order to find new loci and alleles associated with personalities and coping patterns.
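To make the planned QTL analysis concrete, here is a minimal single-marker scan on simulated data (the animal and marker counts match the summary, but the genotype coding, effect size, and QTL position are assumptions): regress the phenotype on each marker's genotype and convert each fit to a LOD score, LOD = (n/2) log10(RSS0/RSS1).

```python
import numpy as np

rng = np.random.default_rng(2)
n_dogs, n_markers = 400, 1000

# Simulated F2 genotypes coded as allele counts (0/1/2) and a phenotype
# with one true additive QTL placed, arbitrarily, at marker 250.
G = rng.binomial(2, 0.5, size=(n_dogs, n_markers)).astype(float)
y = 0.5 * G[:, 250] + rng.normal(size=n_dogs)

rss0 = np.sum((y - y.mean()) ** 2)                   # null model: mean only
lod = np.empty(n_markers)
for j in range(n_markers):
    X = np.column_stack([np.ones(n_dogs), G[:, j]])  # additive single-marker model
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ beta) ** 2)
    lod[j] = (n_dogs / 2) * np.log10(rss0 / rss1)

print(f"top marker: {lod.argmax()}, LOD = {lod.max():.1f}")
```

A real F2 analysis would additionally model dominance and scan interval positions between markers; the loop above shows only the core computation.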
Max ERC Funding
2 499 828 €
Duration
Start date: 2013-03-01, End date: 2018-02-28
Project acronym HETMAT
Project Heterogeneity That Matters for Trade and Welfare
Researcher (PI) Thierry Mayer
Host Institution (HI) FONDATION NATIONALE DES SCIENCES POLITIQUES
Country France
Call Details Starting Grant (StG), SH1, ERC-2012-StG_20111124
Summary Accounting for firms' heterogeneity in trade patterns is probably one of the key innovations in international trade research of the last decade. The impact of the initial papers, such as Melitz (2003) and Bernard and Jensen (1999), is so large that they are considered to have introduced a new paradigm to the field. Apart from providing a convincing framework for a set of empirical facts, the main motivation of this literature was that there are new gains to be expected from trade liberalization. Those come from a selection process that raises aggregate productivity through the reallocation of output among heterogeneous firms. It initially seemed that the information requirements for trade-policy evaluations had become much more demanding, in particular requiring detailed micro data. However, the recent work of Arkolakis et al. (2011) suggests that two aggregate "sufficient statistics" may be all that is needed to compute the welfare changes associated with trade liberalization. Moreover, they show that those statistics are the same when evaluating welfare changes in representative-firm models. The project has three parts. The first starts by showing that the sufficient-statistics approach relies crucially on a specific distributional assumption about heterogeneity: the Pareto distribution. When heterogeneity is not Pareto-distributed, it does matter, i.e. aggregate statistics are not sufficient to evaluate welfare changes and predict trade patterns. The second part of the project specifies which type of firm-level heterogeneity matters. It shows how to identify which sectors are characterized by "productivity sorting" and in which ones "quality sorting" is more relevant. Extending the analysis to multi-product firms, the third part shows that heterogeneity inside the firm also matters for welfare changes following trade shocks. It considers how the change in a firm's product mix following trade liberalization alters the firm's measured productivity.
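For concreteness, the two-statistic welfare calculation the summary alludes to (the formula of Arkolakis et al.) can be sketched as follows; the numbers are illustrative, not estimates from the project. With domestic expenditure share lambda and trade elasticity epsilon < 0, the gains from trade relative to autarky are 1 - lambda^(-1/epsilon).

```python
def acr_gains_from_trade(domestic_share: float, trade_elasticity: float) -> float:
    """Gains from trade relative to autarky: 1 - lambda ** (-1 / epsilon),
    with lambda the domestic expenditure share and epsilon < 0 the trade
    elasticity -- the two aggregate "sufficient statistics"."""
    return 1.0 - domestic_share ** (-1.0 / trade_elasticity)

# Illustrative values: 90% of spending falls on domestic goods, elasticity -5.
print(f"{acr_gains_from_trade(0.90, -5.0):.1%}")   # ~2.1%
```

The first part of the project targets exactly this shortcut: once firm heterogeneity departs from the Pareto distribution, these two statistics no longer pin down the welfare change.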
Max ERC Funding
1 119 040 €
Duration
Start date: 2012-11-01, End date: 2018-07-31
Project acronym M and M
Project Generalization in Mind and Machine
Researcher (PI) Jeffrey Bowers
Host Institution (HI) UNIVERSITY OF BRISTOL
Country United Kingdom
Call Details Advanced Grant (AdG), SH4, ERC-2016-ADG
Summary Is the human mind a symbolic computational device? This issue was at the core of Chomsky’s critique of Skinner in the 1960s, and it motivated the debates over the Parallel Distributed Processing models developed in the 1980s. The recent successes of “deep” networks make this issue topical for psychology and neuroscience, and raise the question of whether symbols are needed for artificial intelligence more generally.
One of the innovations of the current project is to identify simple empirical phenomena that will serve as a critical test-bed for both symbolic and non-symbolic neural networks. In order to make substantial progress on this issue, a series of empirical and computational investigations is organised as follows. First, studies focus on tasks that, according to proponents of symbolic systems, require symbols for the sake of generalisation; accordingly, if non-symbolic networks succeed, it would undermine one of the main motivations for symbolic systems. Second, studies focus on generalisation in tasks in which human performance is well characterised; accordingly, the research will provide important constraints for theories of cognition across a range of domains, including vision, memory, and reasoning. Third, studies develop new learning algorithms designed to make symbolic systems biologically plausible. One of the reasons symbolic networks are often dismissed is the claim that they are not as biologically plausible as non-symbolic models. This last ambition is the most high-risk but also potentially the most important: introducing new computational principles may fundamentally advance our understanding of how the brain learns and computes, and these principles may increase the computational powers of networks in ways that matter for engineering and artificial intelligence.
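As an example of the kind of test-bed the first line of studies envisions (a hypothetical mini-experiment, not taken from the project), a standard non-symbolic network can be trained on the identity function over one input range and probed outside it; a symbolic rule such as "output = input" generalises trivially, whereas an interpolating network typically does not.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(500, 1))
y_train = X_train.ravel()                      # target rule: f(x) = x

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

X_test = np.array([[0.5], [2.0], [5.0]])       # inside vs. outside the training range
print(net.predict(X_test))  # close to 0.5 in-range; typically far off at 2.0 and 5.0
```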
Max ERC Funding
2 495 578 €
Duration
Start date: 2017-09-01, End date: 2022-08-31
Project acronym WATERUNDERTHEICE
Project Where is the water under the Greenland ice sheet?
Researcher (PI) Dorthe Dahl-Jensen
Host Institution (HI) KOBENHAVNS UNIVERSITET
Country Denmark
Call Details Advanced Grant (AdG), PE10, ERC-2009-AdG
Summary Recent analysis of radar depth-sounder data has shown that many areas of the Greenland ice sheet have melt water at the base. The extent of the wet base and the distribution of melt water are poorly known, and no subglacial lakes have been discovered under the ice, in contrast with Antarctica. The effect of the water beneath the ice, however, is well documented: it lubricates the bed and removes the friction between the basal ice and the underlying bedrock. Ice with a wet bed flows faster and reacts rapidly to changes in climate, and the basal melt water contributes to the fresh-water supply from the Greenland ice sheet to the ocean. The primary objectives of the project are to map the extent and impact of melt water under the Greenland ice sheet by tracing internal layers and analyzing bedrock returns in airborne radio-echo sounding data, and to use the mapping results, in conjunction with ice-sheet and hydrostatic models of the movement of the basal water, to predict the ice sheet's response to climate change. Information derived from deep ice cores that reach the bed will be used to constrain the models. We will also study the basal material (dust, DNA, and microbiological material) and bedrock properties from the deep ice-core sites. This will add a further dimension to the study and provide opportunities to look for life under the ice and to constrain the age of the Greenland ice sheet. The proposed research is a high-risk project, because of the difficulty of accessing basal conditions under 3 km of ice, with potential for high-payoff science. The team will consist of scientists and engineers with expertise in palaeoclimate, radar sounding and signal processing, and ice-sheet modelling.
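A back-of-the-envelope sketch of the basal energy balance that decides whether the bed is wet or frozen (parameter values are standard textbook figures, assumed here rather than taken from the project): the bed melts when geothermal plus frictional heating exceeds the heat conducted up into the ice, and the surplus divided by rho_ice * L gives a melt rate; the bed sits at the pressure-melting point, which is depressed below 0 deg C under 3 km of ice.

```python
RHO_ICE = 917.0    # kg m^-3, density of ice
L_FUSION = 3.35e5  # J kg^-1, latent heat of fusion
K_ICE = 2.1        # W m^-1 K^-1, thermal conductivity of ice
GAMMA = 8.7e-4     # K m^-1, pressure-melting-point depression per metre of ice
SECONDS_PER_YEAR = 3.15e7

def basal_melt_rate(q_geo, q_friction, dT_dz):
    """Basal melt rate [m/yr] from the energy surplus (negative => freeze-on)."""
    surplus = q_geo + q_friction - K_ICE * dT_dz   # W m^-2 left over for melting
    return surplus / (RHO_ICE * L_FUSION) * SECONDS_PER_YEAR

H = 3000.0  # ice thickness [m]
print(f"pressure-melting point at the bed: {-GAMMA * H:.1f} deg C")
# Assumed fluxes: 60 mW/m^2 geothermal, 10 mW/m^2 frictional heating,
# and a 0.02 K/m temperature gradient in the basal ice.
print(f"melt rate: {basal_melt_rate(0.060, 0.010, 0.02) * 1000:.1f} mm/yr")
```

A few millimetres of melt per year, integrated over large wet-bedded areas, is what makes the basal water budget significant for both lubrication and the fresh-water flux.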
Max ERC Funding
2 499 999 €
Duration
Start date: 2010-01-01, End date: 2015-12-31