A comprehensive suite of numerical optimization algorithms for maximum likelihood estimation in R.
Motivation
Maximum likelihood estimation is a cornerstone of statistical inference, but finding MLEs often requires numerical optimization. While R provides general-purpose optimizers like optim(), they don't return rich statistical objects or provide specialized algorithms for likelihood-based inference.
numerical.mle fills this gap by providing:
- Statistical objects: All solvers return `mle` objects (from `algebraic.mle`) with parameter estimates, standard errors, and Fisher information
- Specialized algorithms: Gradient ascent, Newton-Raphson, simulated annealing, grid search, and more, each optimized for likelihood maximization
- Constrained optimization: Built-in support for parameter constraints via projection functions and support checking (see the sketch after this list)
- Stochastic methods: Subsample-based optimization for large datasets via `stochastic_loglike()`
- Diagnostic tools: Path tracing, convergence monitoring, and debugging output
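To make the projection idea concrete, here is a minimal hand-rolled sketch for a normal model whose standard deviation must stay positive; the name `project_pos_sd` is illustrative only, not part of the package:

```r
# Projection function: map any candidate theta back into the feasible
# region. Here theta = c(mean, sd) and sd must be strictly positive.
# Illustrative only; numerical.mle's own interface may differ.
project_pos_sd <- function(theta) {
  c(theta[1], max(theta[2], 1e-8))
}

project_pos_sd(c(5, -0.3))  # returns c(5, 1e-08)
```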
The package builds on a layered architecture where specialized solvers (gradient ascent, Newton-Raphson) are built on a flexible mle_local_search() framework, making it easy to implement custom optimization strategies.
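The flavor of that framework can be sketched in a few lines of plain R. The skeleton below is illustrative only (its name and signature are assumptions, not the actual `mle_local_search()` interface): a local search repeatedly steps in a direction until the step becomes negligible, and gradient ascent and Newton-Raphson differ only in how that direction is computed.

```r
# Generic local-search skeleton (illustrative, not the package's API).
# `direction` returns the step direction from the current iterate.
local_search <- function(theta0, direction, rate = 1e-3,
                         max_iter = 10000, tol = 1e-8) {
  theta <- theta0
  for (i in seq_len(max_iter)) {
    step <- rate * direction(theta)
    theta <- theta + step
    if (sqrt(sum(step^2)) < tol) break  # converged: step is tiny
  }
  theta
}

# Gradient ascent: direction is the score (gradient of the log-likelihood).
# Newton-Raphson: direction solves -H(theta) %*% d = score(theta),
# i.e. d <- solve(-hessian(theta), score(theta)).
```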
Installation
You can install numerical.mle from GitHub with:
```r
install.packages("devtools")
devtools::install_github("queelius/numerical.mle")
```

API
A set of methods for fitting log-likelihood functions to data. We provide various adapters for log-likelihood functions, including penalty adapters (for constrained MLEs) and transformation adapters (for transformed MLEs).
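As a rough illustration of the adapter idea, the hand-rolled wrappers below mimic a penalty adapter and a transformation adapter; the function names are hypothetical, not the package's actual adapters:

```r
# Penalty adapter (hand-rolled): heavily penalize parameter values that
# violate a constraint, so an unconstrained optimizer stays feasible.
penalized_loglike <- function(loglike, violation, penalty = 1e6) {
  function(theta) {
    v <- violation(theta)
    if (v > 0) return(-penalty * v^2)  # infeasible: skip the log-likelihood
    loglike(theta)
  }
}

# Transformation adapter (hand-rolled): optimize over an unconstrained
# reparameterization, e.g. eta = c(mean, log(sd)), and map back.
transformed_loglike <- function(loglike, inverse) {
  function(eta) loglike(inverse(eta))
}

x <- rnorm(50, mean = 2, sd = 1)
ll <- function(theta) sum(dnorm(x, theta[1], theta[2], log = TRUE))

# sd = theta[2] must be positive: violation is positive when sd <= 0.
ll_pen <- penalized_loglike(ll, violation = function(theta) -theta[2])
ll_trn <- transformed_loglike(ll, inverse = function(eta) c(eta[1], exp(eta[2])))
```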
The object representing a fitted model is a type of `mle` object: the maximum likelihood estimator of the model with respect to the observed data. We use the `algebraic.mle` R package for this purpose.
The API mostly consists of generic methods with implementations for the various `mle`-type objects. For a full list of functions, see the function reference for numerical.mle.
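In S3 terms, that means functions dispatch on the class of the fitted object. A toy illustration of the pattern (the generic and class names here are invented for illustration, not taken from the package):

```r
# Toy S3 dispatch: a generic with methods for an mle class hierarchy.
describe <- function(x, ...) UseMethod("describe")
describe.mle <- function(x, ...) cat("an mle object\n")
describe.mle_gradient_ascent <- function(x, ...) {
  cat("an MLE found by gradient ascent\n")
}

obj <- structure(list(), class = c("mle_gradient_ascent", "mle"))
describe(obj)  # dispatches to the gradient-ascent method
```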
Quick Example
Here's a simple example of finding the MLE for a normal distribution using gradient ascent:
```r
library(numerical.mle)

# Generate some data
data <- rnorm(100, mean = 5, sd = 2)

# Define log-likelihood and score function
loglike <- function(theta) sum(dnorm(data, theta[1], theta[2], log = TRUE))
score <- function(theta) numDeriv::grad(loglike, theta)

# Find MLE using gradient ascent
mle <- mle_gradient_ascent(
  theta0 = c(0, 1),
  score = score,
  options = list(loglike = loglike, line_search = TRUE)
)

print(mle$theta.hat)  # Estimated parameters
```

For more detailed examples and workflows, see the vignette on fitting models to unknown DGPs.
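As a sanity check (plain base R, independent of the package), the normal distribution's MLE has a closed form that the gradient-ascent estimate should approximately match:

```r
# Closed-form normal MLE: sample mean and the n-denominator
# (maximum-likelihood, not the unbiased n-1) standard deviation.
c(mean(data), sqrt(mean((data - mean(data))^2)))
```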