Combines p-values from independent hypothesis tests into a single omnibus test using Fisher's method.
Usage: fisher_combine(...)

Value: A hypothesis_test object of subclass fisher_combined_test containing:
- Fisher's chi-squared statistic \(-2\sum\log(p_i)\)
- P-value from the \(\chi^2_{2k}\) distribution
- Degrees of freedom (\(2k\))
- Number of tests combined
- Vector of the individual p-values
Fisher's method is a meta-analytic technique for combining evidence from multiple independent tests of the same hypothesis (or related hypotheses). It demonstrates a key principle: combining hypothesis tests yields a hypothesis test (the closure property).
Given \(k\) independent p-values \(p_1, \ldots, p_k\), Fisher's statistic is:
$$X^2 = -2 \sum_{i=1}^{k} \log(p_i)$$
Under the global null hypothesis (all individual nulls are true), this follows a chi-squared distribution with \(2k\) degrees of freedom.
If \(p_i\) is a valid p-value and its null hypothesis is true, then \(p_i \sim U(0,1)\) (exactly so for continuous test statistics). Therefore \(-2\log(p_i) \sim \chi^2_2\). The sum of independent chi-squared random variables is itself chi-squared with degrees of freedom equal to the sum, giving \(X^2 \sim \chi^2_{2k}\).
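A quick simulation in base R (a minimal sketch, independent of this package) illustrates the argument: with uniform p-values, the combined statistic matches the \(\chi^2_{2k}\) reference distribution.

```r
# Under the global null, each p_i ~ U(0,1), so -2*log(p_i) ~ chi^2_2,
# and the sum over k independent tests ~ chi^2_{2k}.
set.seed(42)
k <- 3
stat <- replicate(1e4, -2 * sum(log(runif(k))))

# The simulated mean should be near 2k = 6, and the empirical 95th
# percentile near the chi^2_6 critical value qchisq(0.95, df = 6).
mean(stat)
quantile(stat, 0.95)
qchisq(0.95, df = 2 * k)
```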
A significant combined p-value indicates that at least one of the individual null hypotheses is likely false, but does not identify which one(s). Fisher's method is sensitive to any departure from the global null, which makes it powerful when real effects exist but anti-conservative when its assumptions (in particular, independence of the tests) are violated.
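For the three studies used in the examples below (p = 0.08, 0.12, 0.04), the statistic and combined p-value can be reproduced by hand in base R (a sanity check, not the package API):

```r
# Three p-values from the worked example
p <- c(0.08, 0.12, 0.04)

# Fisher's statistic: -2 * sum(log(p_i))
X2 <- -2 * sum(log(p))   # 15.72974

# Combined p-value from the chi-squared distribution with 2k df
k <- length(p)
pchisq(X2, df = 2 * k, lower.tail = FALSE)   # 0.01528049
```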
This function exemplifies the closure property from SICP: the operation
of combining hypothesis tests produces another hypothesis test. The result
can be further combined, adjusted, or analyzed using the same generic
methods (pval(), test_stat(), is_significant_at(), etc.).
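Because the result is itself a hypothesis_test, combinations can be nested. A sketch (assuming, as in the examples below, that fisher_combine() accepts both raw p-values and hypothesis_test objects; valid only when all underlying tests are mutually independent):

# Combine two pairs of studies, then combine the combined results
ab <- fisher_combine(0.03, 0.20)
cd <- fisher_combine(0.15, 0.01)
overall <- fisher_combine(ab, cd)

# The nested result answers to the same generics
pval(overall)
is_significant_at(overall, 0.05)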
See also: adjust_pval() for multiple testing correction (a different goal)
# Scenario: Three independent studies test the same drug effect
# Study 1: p = 0.08 (trend, not significant)
# Study 2: p = 0.12 (not significant)
# Study 3: p = 0.04 (significant at 0.05)
# Combine using raw p-values
combined <- fisher_combine(0.08, 0.12, 0.04)
combined
#> Hypothesis test ( fisher_combined_test )
#> -----------------------------
#> Test statistic: 15.72974
#> P-value: 0.01528049
#> Degrees of freedom: 6
#> Significant at 5% level: TRUE
is_significant_at(combined, 0.05) # Stronger evidence together
#> [1] TRUE
# Or combine hypothesis_test objects directly
t1 <- wald_test(estimate = 1.5, se = 0.9)
t2 <- wald_test(estimate = 0.8, se = 0.5)
t3 <- z_test(rnorm(30, mean = 0.3), mu0 = 0, sigma = 1)
fisher_combine(t1, t2, t3)
#> Hypothesis test ( fisher_combined_test )
#> -----------------------------
#> Test statistic: 20.14234
#> P-value: 0.002612353
#> Degrees of freedom: 6
#> Significant at 5% level: TRUE
# The result is itself a hypothesis_test, so it composes
# (though combining non-independent tests is invalid)