Applies a multiple testing correction to a hypothesis test or a list of tests, returning adjusted test object(s).
adjust_pval(x, method = "bonferroni", n = NULL)

For a single test: a hypothesis_test object of subclass adjusted_test with the adjusted p-value. For a list of tests: a list of adjusted test objects.
The returned object contains:
Original test statistic (unchanged)
Adjusted p-value
Original degrees of freedom (unchanged)
The method used
The unadjusted p-value
Number of tests in the family
When performing multiple hypothesis tests, the probability of at least one false positive (Type I error) increases. Multiple testing corrections adjust p-values to control error rates across the family of tests.
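With m independent tests each run at level alpha, the probability of at least one false positive is 1 - (1 - alpha)^m, which grows quickly with m. A small base-R illustration (independent of this package) at alpha = 0.05:

alpha <- 0.05
m <- c(1, 5, 10, 20)
round(1 - (1 - alpha)^m, 3)  # family-wise error rate under independence
#> [1] 0.050 0.226 0.401 0.642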
This function demonstrates the higher-order function pattern: it takes a hypothesis test as input and returns a transformed hypothesis test as output. The adjusted test retains all original properties but with a corrected p-value.
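A minimal sketch of that pattern (the field names and internals below are assumptions for illustration, not the package's actual implementation):

# Sketch only: take a test, return a modified copy with the adjusted p-value,
# keeping the original. Field names other than original_pval are hypothetical.
adjust_one <- function(test, method = "bonferroni", n) {
  out <- test
  out$original_pval <- pval(test)                       # keep the raw p-value
  out$p_value <- stats::p.adjust(pval(test), method = method, n = n)
  out$adjust_method <- method
  out$n_tests <- n
  class(out) <- c("adjusted_test", class(test))         # add the subclass
  out
}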
The method parameter accepts any method supported by stats::p.adjust() (a brief comparison using p.adjust() directly follows the list):
"bonferroni"Multiplies p-values by \(n\). Controls family-wise error rate (FWER). Conservative.
"holm"Step-down Bonferroni. Controls FWER. Less conservative than Bonferroni while maintaining strong control.
"BH" or "fdr"Benjamini-Hochberg procedure. Controls false discovery rate (FDR). More powerful for large-scale testing.
"hochberg"Step-up procedure. Controls FWER under independence.
"hommel"More powerful than Hochberg but computationally intensive.
"BY"Benjamini-Yekutieli. Controls FDR under arbitrary dependence.
"none"No adjustment (identity transformation).
This function exemplifies transforming hypothesis tests: the output can be used with all standard generics (pval(), test_stat(), is_significant_at(), etc.) and can be further composed.
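For instance (a sketch; beyond the generic names given above, the exact call signatures are assumptions):

w_adj <- adjust_pval(wald_test(estimate = 2.0, se = 0.8), method = "holm", n = 5)
test_stat(w_adj)                 # the statistic is unchanged by the adjustment
is_significant_at(w_adj, 0.05)   # the decision now uses the adjusted p-value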
stats::p.adjust() for the underlying adjustment,
fisher_combine() for combining (not adjusting) p-values
# Single test adjustment (must specify n)
w <- wald_test(estimate = 2.0, se = 0.8)
pval(w) # Original p-value
#> [1] 0.01241933
w_adj <- adjust_pval(w, method = "bonferroni", n = 10)
pval(w_adj) # Adjusted (multiplied by 10, capped at 1)
#> [1] 0.1241933
w_adj$original_pval # Can still access original
#> [1] 0.01241933
# Adjusting multiple tests at once
tests <- list(
wald_test(estimate = 2.5, se = 0.8),
wald_test(estimate = 1.2, se = 0.5),
wald_test(estimate = 0.8, se = 0.9)
)
# BH (FDR) correction - n is inferred from list length
adjusted <- adjust_pval(tests, method = "BH")
sapply(adjusted, pval) # Adjusted p-values
#> [1] 0.005334152 0.024592608 0.374062797
# Compare methods
sapply(tests, pval) # Original
#> [1] 0.001778051 0.016395072 0.374062797
sapply(adjust_pval(tests, method = "bonferroni"), pval) # Conservative
#> [1] 0.005334152 0.049185216 1.000000000
sapply(adjust_pval(tests, method = "BH"), pval) # Less conservative
#> [1] 0.005334152 0.024592608 0.374062797