Creates a solver using the BFGS quasi-Newton method via optim().
BFGS approximates the Hessian from gradient information, providing
second-order-like convergence without computing the Hessian directly.
Details
BFGS is often a good default choice: it is more robust than Newton-Raphson (no explicit Hessian inversion, so no singular-matrix failures) and converges faster than gradient ascent (it exploits curvature information).
The solver automatically uses the score function from the problem if one is available; otherwise it falls back to numerical gradients.
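The behaviour described above can be sketched with base R's optim(). This is an illustrative sketch, not the package's actual implementation: the names bfgs_sketch, problem$objective, and problem$score are assumptions made here for demonstration.

```r
# Hypothetical sketch of a solver factory built on optim()'s BFGS method.
# The problem is assumed to be a list with $objective (and optionally $score).
bfgs_sketch <- function() {
  function(problem, start) {
    # Use the problem's analytic score (gradient) if supplied;
    # when gr = NULL, optim() computes gradients numerically.
    gr <- if (!is.null(problem$score)) problem$score else NULL
    optim(
      par = start,
      fn = problem$objective,
      gr = gr,
      method = "BFGS",
      control = list(fnscale = -1)  # fnscale = -1 turns optim() into a maximizer
    )
  }
}

# Example: maximize f(x, y) = -(x - 1)^2 - (y - 2)^2, whose maximum is at (1, 2)
problem <- list(objective = function(p) -(p[1] - 1)^2 - (p[2] - 2)^2)
bfgs_sketch()(problem, c(0, 0))$par
```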
Examples
if (FALSE) { # \dontrun{
# Basic usage
result <- bfgs()(problem, c(0, 1))  # solve starting from (0, 1)
# Race BFGS against gradient ascent
strategy <- bfgs() %|% gradient_ascent()
} # }