Most survival-analysis tooling forces you to pick from a catalog: Weibull, exponential, log-normal. dfr.dist flips this: you specify the hazard function directly, and it handles all the math.
## The Core Insight
Instead of choosing Weibull(shape, scale), you write:

```r
h <- function(t, x) exp(b0 + b1*x + b2*t)  # your hazard function
model <- dfr_dist(hazard = h)
```
The package computes survival functions, cumulative hazards, quantiles, and sampling—all from your custom hazard.
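Under the hood this is just calculus on the hazard. Here is a minimal base-R sketch of those derivations; the names `h`, `H`, `S`, `q_fun`, and `r_fun` are illustrative, not dfr.dist's actual API:

```r
# Derive everything from a hazard alone (illustrative names, not the
# dfr.dist API).
h <- function(t) 0.5 * t                 # example: linearly increasing hazard

# Cumulative hazard H(t) = integral of h from 0 to t
H <- function(t) integrate(h, 0, t)$value

# Survival function S(t) = exp(-H(t)); the CDF is F(t) = 1 - S(t)
S <- function(t) exp(-H(t))

# Quantile function: solve 1 - S(t) = p by root-finding
q_fun <- function(p) uniroot(function(t) (1 - S(t)) - p, c(1e-8, 100))$root

# Sampling via inverse transform: apply the quantile function to uniforms
r_fun <- function(n) sapply(runif(n), q_fun)

S(2)       # exp(-1), since H(2) = 0.25 * 2^2 = 1 for this hazard
q_fun(0.5) # median failure time, 2 * sqrt(log(2))
```

The inverse-transform step is why only the hazard is needed: once H(t) exists, the survival function, quantiles, and sampler all follow mechanically.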
## Why This Matters
### 1. No Distributional Constraints
Want a bathtub curve? Multiple peaks? Time-varying effects? Just write the hazard function. No need to force reality into exponential/Weibull boxes.
### 2. Covariate Flexibility
Your hazard can depend on any covariates:
```r
# baseline and the beta/gamma coefficients are free parameters
h <- function(t, age, treatment) {
  baseline * exp(beta_age*age + beta_tx*treatment + gamma*t)
}
```
### 3. Integrates with the MLE Stack
Works seamlessly with algebraic.mle for parameter estimation and likelihood.model for likelihood contributions.
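The reason this composes so cleanly is that the log-likelihood also falls out of the hazard: for an observation at time t with event indicator d (1 = failure, 0 = censored), the contribution is d·log h(t) − H(t). A base-R sketch with a constant hazard, not using the actual algebraic.mle or likelihood.model interfaces:

```r
# Likelihood from a hazard (sketch; not the algebraic.mle API).
set.seed(1)
t_obs <- rexp(200, rate = 2)   # simulated failure times, true rate 2
d_obs <- rep(1, 200)           # all events observed (no censoring)

hazard <- function(t, rate) rep(rate, length(t))  # constant hazard
cumhaz <- function(t, rate) rate * t              # closed-form H(t)

# log L = sum( d * log h(t) - H(t) )
negloglik <- function(rate) {
  -sum(d_obs * log(hazard(t_obs, rate)) - cumhaz(t_obs, rate))
}

fit <- optimize(negloglik, interval = c(0.01, 10))
fit$minimum   # MLE of the rate, close to the true value 2
```

For a constant hazard this reproduces the textbook exponential MLE, n / sum(t); swapping in any other hazard only changes the two functions, not the estimation machinery.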
## Two Probabilistic Constraints
Your hazard just needs to satisfy:
- Non-negative: h(t, x) ≥ 0 for all t, x
- Eventual failure: the cumulative hazard → ∞ as t → ∞
That’s it. The package handles the rest.
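If you want to convince yourself a candidate hazard satisfies both conditions, a quick numeric check is easy to write. `check_hazard` below is a hypothetical helper, not part of dfr.dist, and the divergence test is only a heuristic since it evaluates the cumulative hazard on a finite horizon:

```r
# Numeric sanity check for a candidate hazard (hypothetical helper,
# not part of dfr.dist). Assumes h is vectorized in t.
check_hazard <- function(h, horizon = 100) {
  t_grid <- seq(1e-6, horizon, length.out = 1000)
  nonneg <- all(h(t_grid) >= 0)                  # h(t) >= 0 on the grid
  H <- function(t) integrate(h, 0, t)$value      # cumulative hazard
  growing <- H(horizon) > H(horizon / 2)         # H still increasing (heuristic)
  c(nonnegative = nonneg, H_still_growing = growing)
}

check_hazard(function(t) 0.5 * t)   # passes both checks
```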
## Connection to My Thesis
This generalizes my thesis work on masked failure data, where I was limited to Weibull and exponential distributions. With dfr.dist, you're not restricted to parametric families: you specify the true failure mechanism, and the math adapts.
R package • Works with algebraic.mle • Documentation • GitHub