Applies a multiple testing correction to a hypothesis test or a list of tests, returning adjusted test object(s).

adjust_pval(x, method = "bonferroni", n = NULL)

Arguments

x

A hypothesis_test object, or a list of such objects.

method

Character. Adjustment method (see Details). Default is "bonferroni".

n

Integer. Total number of tests in the family. If x is a list, defaults to length(x). For a single test, this must be specified.

Value

For a single test: a hypothesis_test object of subclass adjusted_test with the adjusted p-value. For a list of tests: a list of adjusted test objects.

The returned object contains:

stat

Original test statistic (unchanged)

p.value

Adjusted p-value

dof

Original degrees of freedom (unchanged)

adjustment_method

The method used

original_pval

The unadjusted p-value

n_tests

Number of tests in the family

Details

When performing multiple hypothesis tests, the probability of at least one false positive (Type I error) increases. Multiple testing corrections adjust p-values to control error rates across the family of tests.
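The inflation of the family-wise error rate is easy to verify directly. A minimal sketch in base R, assuming independent tests at level 0.05:

```r
# Probability of at least one false positive (Type I error) across
# n independent tests, each run at significance level alpha:
#   P(at least one) = 1 - (1 - alpha)^n
alpha <- 0.05
n <- 10
1 - (1 - alpha)^n
#> [1] 0.4012631
```

With ten independent tests at the 5% level, there is roughly a 40% chance of at least one false positive, which is why a correction is needed.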

This function demonstrates the higher-order function pattern: it takes a hypothesis test as input and returns a transformed hypothesis test as output. The adjusted test retains all original properties but with a corrected p-value.

Available Methods

The method parameter accepts any method supported by stats::p.adjust():

"bonferroni"

Multiplies p-values by n (capping at 1). Controls the family-wise error rate (FWER). Conservative.

"holm"

Step-down Bonferroni. Controls FWER. Less conservative than Bonferroni while maintaining strong control.

"BH" or "fdr"

Benjamini-Hochberg procedure. Controls false discovery rate (FDR). More powerful for large-scale testing.

"hochberg"

Step-up procedure. Controls FWER under independence.

"hommel"

More powerful than Hochberg but computationally intensive.

"BY"

Benjamini-Yekutieli. Controls FDR under arbitrary dependence.

"none"

No adjustment (identity transformation).
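The differences between methods can be seen by running stats::p.adjust() directly on a small vector of p-values (a sketch using base R only, independent of this package):

```r
p <- c(0.001, 0.01, 0.04, 0.20)

# Bonferroni: multiply each p-value by n = 4, cap at 1
stats::p.adjust(p, method = "bonferroni")
#> [1] 0.004 0.040 0.160 0.800

# Benjamini-Hochberg: scales p-values by n/rank, so it penalizes
# the smallest p-values less than Bonferroni does
stats::p.adjust(p, method = "BH")
#> [1] 0.00400000 0.02000000 0.05333333 0.20000000
```

Note how BH leaves the larger p-values far less inflated, which is the source of its extra power in large-scale testing.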

Higher-Order Function Pattern

This function exemplifies transforming hypothesis tests:

adjust_pval : hypothesis_test -> hypothesis_test

The output can be used with all standard generics (pval(), test_stat(), is_significant_at(), etc.) and can be further composed.
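A sketch of this composition, assuming the package's wald_test(), pval(), and is_significant_at() behave as documented (the positional alpha argument to is_significant_at() is an assumption):

```r
# The adjusted test is still a hypothesis_test, so it composes with
# any function that accepts one.
w <- wald_test(estimate = 2.0, se = 0.8)
w_holm <- adjust_pval(w, method = "holm", n = 10)

pval(w_holm)                    # adjusted p-value
is_significant_at(w_holm, 0.05) # generics work on adjusted tests too
```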

See also

stats::p.adjust() for the underlying adjustment, fisher_combine() for combining (not adjusting) p-values

Examples

# Single test adjustment (must specify n)
w <- wald_test(estimate = 2.0, se = 0.8)
pval(w)  # Original p-value
#> [1] 0.01241933

w_adj <- adjust_pval(w, method = "bonferroni", n = 10)
pval(w_adj)  # Adjusted (multiplied by 10, capped at 1)
#> [1] 0.1241933
w_adj$original_pval  # Can still access original
#> [1] 0.01241933

# Adjusting multiple tests at once
tests <- list(
  wald_test(estimate = 2.5, se = 0.8),
  wald_test(estimate = 1.2, se = 0.5),
  wald_test(estimate = 0.8, se = 0.9)
)

# BH (FDR) correction - n is inferred from list length
adjusted <- adjust_pval(tests, method = "BH")
sapply(adjusted, pval)  # Adjusted p-values
#> [1] 0.005334152 0.024592608 0.374062797

# Compare methods
sapply(tests, pval)  # Original
#> [1] 0.001778051 0.016395072 0.374062797
sapply(adjust_pval(tests, method = "bonferroni"), pval)  # Conservative
#> [1] 0.005334152 0.049185216 1.000000000
sapply(adjust_pval(tests, method = "BH"), pval)  # Less conservative
#> [1] 0.005334152 0.024592608 0.374062797