Computes the Wald test statistic and p-value for testing whether a parameter (or parameter vector) equals a hypothesized value.
wald_test(estimate, se = NULL, vcov = NULL, null_value = 0)

estimate: Numeric. The estimated parameter value \(\hat{\theta}\). A scalar
  for the univariate case or a vector for the multivariate case.

se: Numeric. Standard error of the estimate for the univariate case.
  Mutually exclusive with vcov.

vcov: Numeric matrix. Variance-covariance matrix for the multivariate
  case. Mutually exclusive with se.

null_value: Numeric. The hypothesized value \(\theta_0\) under the
  null hypothesis. Default is 0. A scalar for the univariate case or a
  vector of the same length as estimate for the multivariate case.
A hypothesis_test object of subclass wald_test containing:

- The Wald statistic \(W\)
- Two-sided p-value from the chi-squared distribution
- Degrees of freedom (1 for univariate, \(k\) for multivariate)
- The z-score (univariate case only)
- The input estimate
- The input standard error (univariate case only)
- The input variance-covariance matrix (multivariate case only)
- The input null hypothesis value
The Wald test is a fundamental tool in statistical inference, used to test the null hypothesis \(H_0: \theta = \theta_0\) against the alternative \(H_1: \theta \neq \theta_0\).
Univariate case (when se is provided):
The test is based on the asymptotic normality of maximum likelihood
estimators. Under regularity conditions, if \(\hat{\theta}\) is the MLE
with standard error \(SE(\hat{\theta})\), then:
$$z = \frac{\hat{\theta} - \theta_0}{SE(\hat{\theta})} \sim N(0, 1)$$
The Wald statistic is reported as \(W = z^2\), which follows a chi-squared distribution with 1 degree of freedom under \(H_0\). The z-score is stored in the returned object for reference.
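The univariate formula can be checked by hand with base R. This is a sketch of the computation described above, not the internals of wald_test(); the numbers match the univariate example further down.

```r
# Reproduce the univariate Wald statistic directly from the formula.
estimate   <- 2.5
se         <- 0.8
null_value <- 0

z <- (estimate - null_value) / se            # z-score: 3.125
W <- z^2                                     # Wald statistic: 9.765625
p <- pchisq(W, df = 1, lower.tail = FALSE)   # two-sided p-value

# Equivalent p-value from the standard normal reference distribution:
p_normal <- 2 * pnorm(-abs(z))               # same value as p
```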
Multivariate case (when vcov is provided):
For a \(k\)-dimensional parameter vector \(\hat{\theta}\) with
variance-covariance matrix \(\Sigma\), the Wald statistic is:
$$W = (\hat{\theta} - \theta_0)' \Sigma^{-1} (\hat{\theta} - \theta_0) \sim \chi^2(k)$$
The p-value is computed as \(P(\chi^2_k \geq W)\).
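The quadratic form can likewise be computed by hand with base R. Again, this is a sketch of the formula above rather than the package's internals; it matches the multivariate example further down.

```r
# Reproduce the multivariate Wald statistic directly from the formula.
est        <- c(2.0, 3.0)
V          <- matrix(c(1.0, 0.3, 0.3, 1.0), 2, 2)
null_value <- c(0, 0)

d <- est - null_value
W <- drop(t(d) %*% solve(V) %*% d)               # quadratic form, ~10.33
p <- pchisq(W, df = length(d), lower.tail = FALSE)
```

For \(k = 2\) the chi-squared survival function reduces to \(e^{-W/2}\), which gives a quick sanity check on the p-value.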
The Wald test is one of the "holy trinity" of likelihood-based tests,
alongside the likelihood ratio test (lrt()) and the score test.
For large samples, all three are asymptotically equivalent, but they
can differ substantially in finite samples.
# Univariate: test whether a regression coefficient differs from zero
w <- wald_test(estimate = 2.5, se = 0.8, null_value = 0)
w
#> Hypothesis test (wald_test)
#> -----------------------------
#> Test statistic: 9.765625
#> P-value: 0.00177805059821686
#> Degrees of freedom: 1
#> Significant at 5% level: TRUE
# Extract components
test_stat(w) # Wald statistic (chi-squared)
#> [1] 9.765625
w$z # z-score
#> [1] 3.125
pval(w) # p-value
#> [1] 0.001778051
is_significant_at(w, 0.05)
#> [1] TRUE
# Test against a non-zero null
wald_test(estimate = 2.5, se = 0.8, null_value = 2)
#> Hypothesis test (wald_test)
#> -----------------------------
#> Test statistic: 0.390625
#> P-value: 0.531971058097401
#> Degrees of freedom: 1
#> Significant at 5% level: FALSE
# Multivariate: test two parameters jointly
est <- c(2.0, 3.0)
V <- matrix(c(1.0, 0.3, 0.3, 1.0), 2, 2)
w_mv <- wald_test(estimate = est, vcov = V, null_value = c(0, 0))
test_stat(w_mv)
#> [1] 10.32967
dof(w_mv) # 2
#> [1] 2
pval(w_mv)
#> [1] 0.005714005