
Fisher's Method for Combining Independent P-Values

A p-value is the probability of obtaining a result at least as extreme as the one observed if the null hypothesis is true; it is compared against a chosen significance level to control the type I error rate. Type I error means rejecting the null hypothesis when it is, in fact, correct. The null hypothesis is (nearly always) a statement that two groups are not different, that there is no relationship among some variables, or some other statement that what we expect to find does not, in fact, exist. So a type I error is saying that something is happening when, in fact, nothing is. All of this rests on the idea that we have only a sample from a population.
  1. Why Combine P-Values?

    • In some cases, multiple studies are about the same phenomenon. For example, there are many studies examining the relationship between smoking and cancer rates. Each of these will provide a p-value. By combining multiple studies, you can get more precise estimates of what is going on.

  2. The Idea of Fisher's Method

    • Given a collection of p-values from independent studies, Fisher's method is to take the natural logarithm of each p-value, multiply the sum of these logarithms by -2, and use that total as the test statistic. Under the null hypothesis, this statistic follows a chi-square distribution with 2L degrees of freedom, where L is the number of p-values. The combined p-value for this statistic can be obtained from statistical tables, from statistical software such as SAS, R or SPSS, from Excel, or from some scientific calculators, as in the sketch below.
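
As a concrete illustration, here is a minimal Python sketch of the computation just described, using SciPy's chi-square distribution for the final step. The function name and the example p-values are invented for illustration.

import numpy as np
from scipy import stats

def fisher_combine(p_values):
    """Combine independent p-values with Fisher's method (illustrative sketch)."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.sum(np.log(p))       # -2 times the sum of the natural logs
    df = 2 * len(p)                            # 2L degrees of freedom, L = number of p-values
    combined_p = stats.chi2.sf(statistic, df)  # upper-tail chi-square probability
    return statistic, combined_p

# Hypothetical p-values from three independent studies
statistic, combined_p = fisher_combine([0.04, 0.10, 0.02])
print(f"chi-square = {statistic:.3f}, combined p-value = {combined_p:.4f}")

SciPy also provides scipy.stats.combine_pvalues, which implements Fisher's method among other combination rules, so in practice the hand-rolled function above is rarely needed.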

  3. Dangers of Combining P-Values: Misinterpreting the Result

    • One danger of combining p-values is misinterpreting the result. This is part of what Stephen Ziliak and Deirdre McCloskey call the "cult of statistical significance." As more samples are combined, ever smaller effect sizes become statistically significant, but statistical significance does not imply practical importance. For example, suppose a particular diet were found to lead to a weight loss of 1 oz. per month. With enough combined samples this would be statistically significant, yet few people would care about a diet with such a small effect; the simulation sketched below illustrates the point.
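
The following Python sketch makes the point with simulated data. The effect size, standard deviation, and sample sizes are invented for illustration: the same tiny true effect goes undetected in a small study but will typically come out as "significant" once the sample is large enough.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented numbers for illustration: a true effect of 1 unit of weight loss
# against large individual variation (standard deviation of 50 units).
true_effect, sd = 1.0, 50.0
for n in (100, 100_000):
    diet = rng.normal(-true_effect, sd, n)   # weight change on the diet
    control = rng.normal(0.0, sd, n)         # weight change without it
    t_stat, p_value = stats.ttest_ind(diet, control)
    print(f"n = {n:>7} per group: p = {p_value:.4f}")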

  4. Alternatives to Combining P-Values

    • Rather than combine p-values, it is often a better idea to combine effect sizes. The effect size could be a difference between two groups, a regression coefficient, an odds ratio, or any of a number of other measures, depending on which statistic was used. This type of analysis is called meta-analysis, which is a field of study unto itself; a simple fixed-effect example is sketched below.
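
One standard way to pool effect sizes is a fixed-effect (inverse-variance) meta-analysis, in which each study's estimate is weighted by the inverse of its variance. Here is a minimal Python sketch; the study estimates and standard errors are invented for illustration.

import numpy as np
from scipy import stats

def fixed_effect_meta(effects, std_errors):
    """Pool study effect sizes with inverse-variance (fixed-effect) weights."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    z = pooled / pooled_se
    p_value = 2 * stats.norm.sf(abs(z))      # two-sided p-value for the pooled effect
    return pooled, pooled_se, p_value

# Hypothetical studies, each reporting a mean difference and its standard error
pooled, pooled_se, p_value = fixed_effect_meta([0.30, 0.45, 0.20], [0.12, 0.20, 0.10])
print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f}), p = {p_value:.4f}")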

