I have a fairly broad interest in problem-solving, from statistics to algorithms. Over the years, I’ve accumulated a collection of problem sets from graduate coursework and independent study, with solutions to challenging problems in computational statistics, numerical methods, and algorithm design.
Overview
The problem sets cover several courses I took during my mathematics master’s program at SIUe, focusing on computational methods and statistical computing. While I don’t claim expertise in all areas, working through these problems deepened my understanding of how statistical methods actually work under the hood—beyond just calling library functions.
STAT 575: Computational Statistics
This was a particularly valuable course taught by Dr. Qiang Beidi during Summer 2021. The problem sets covered:
Numerical Methods:
- Newton-Raphson method for root finding
- Secant method and convergence analysis
- Multivariate optimization techniques
- Numerical derivatives (Jacobian and Hessian computation)
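To make the first item concrete, here is a minimal sketch of a Newton-Raphson root finder in R; the function name, tolerance, and test function are illustrative, not taken from the actual problem sets:

```r
# Newton-Raphson: iterate x <- x - f(x) / f'(x) until the step is tiny
newton_raphson <- function(f, fprime, x0, tol = 1e-10, max_iter = 100) {
  x <- x0
  for (i in seq_len(max_iter)) {
    step <- f(x) / fprime(x)
    x <- x - step
    if (abs(step) < tol) return(x)
  }
  stop("Newton-Raphson did not converge")
}

# Example: root of f(x) = x^2 - 2 starting from x0 = 1, i.e. sqrt(2)
newton_raphson(function(x) x^2 - 2, function(x) 2 * x, x0 = 1)
```

The choice of `x0` matters: a poor starting value (say, one where `fprime` is near zero) can send the iteration far from the root, which is exactly the kind of failure mode these problem sets explore.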
Statistical Computing:
- Poisson regression with hand-coded Newton-Raphson
- Maximum likelihood estimation
- Comparison with built-in GLM implementations
Sampling and Simulation:
- Inverse transform method for logistic and Cauchy distributions
- Acceptance-rejection sampling for custom distributions
- Generating from geometric distributions via Bernoulli trials
- Sampling from bimodal and half-normal distributions
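The inverse transform method in the first bullet can be sketched in a few lines: if U is Uniform(0,1), then applying the inverse CDF of a target distribution to U yields a draw from that distribution. The closed-form inverses below are standard; the variable names are illustrative:

```r
# Inverse transform sampling: F^{-1}(U) ~ F when U ~ Uniform(0, 1)
# Logistic inverse CDF: log(u / (1 - u)); Cauchy inverse CDF: tan(pi * (u - 1/2))
set.seed(42)
u <- runif(10000)
logistic_draws <- log(u / (1 - u))
cauchy_draws <- tan(pi * (u - 0.5))
```

Both target distributions here have closed-form inverse CDFs, which is what makes inverse transform attractive; distributions without one (like the bimodal examples) are where acceptance-rejection sampling comes in.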
The problem sets required implementing these methods from scratch in R, which forced a deep understanding of the underlying mathematics. For example, fitting a Poisson regression model by hand-coding the Newton-Raphson algorithm—computing gradients and Hessians numerically—teaches you far more than just calling glm().
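The Poisson regression exercise described above can be sketched as follows, assuming a log link: the gradient of the log-likelihood is X'(y − μ) and the Hessian is −X'WX with W = diag(μ), μ = exp(Xβ). The function name and simulated data below are illustrative, not the course's actual code:

```r
# Hand-coded Newton-Raphson for Poisson regression with a log link.
# Gradient: X'(y - mu); Hessian: -X'WX where W = diag(mu), mu = exp(X beta)
poisson_nr <- function(X, y, tol = 1e-8, max_iter = 50) {
  beta <- rep(0, ncol(X))
  for (i in seq_len(max_iter)) {
    mu <- exp(X %*% beta)
    grad <- t(X) %*% (y - mu)
    hess <- -t(X) %*% (X * as.vector(mu))  # -X'WX
    step <- solve(hess, grad)              # Newton step: H^{-1} grad
    beta <- beta - step
    if (max(abs(step)) < tol) break
  }
  as.vector(beta)
}

# Simulated data, then compare against glm() as the text describes
set.seed(1)
x <- rnorm(100)
y <- rpois(100, lambda = exp(0.5 + 0.8 * x))
X <- cbind(1, x)
beta_hat <- poisson_nr(X, y)
# glm(y ~ x, family = poisson) should give essentially the same coefficients
```

Verifying the hand-coded fit against `glm()` is a useful sanity check, and disagreements usually trace back to a sign error in the Hessian or a bad starting value.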
STAT 482: Additional Topics
This course covered complementary material in statistical methods and applications, with problem sets focused on practical implementation and theoretical understanding.
Why This Matters
There’s value in implementing statistical methods from scratch:
- Understanding tradeoffs: When you code Newton’s method yourself, you understand why starting values matter and when convergence fails.
- Debugging intuition: If a built-in function gives unexpected results, you can reason about what might be going wrong internally.
- Foundation for research: When you need to extend existing methods or develop new ones, understanding the computational substrate is essential.
- Appreciating library implementations: After hand-coding these algorithms, you gain respect for well-optimized statistical software.
During my cancer treatment and master’s program, working through these problem sets was a way to stay engaged with concrete problems when larger research questions felt overwhelming. There’s something therapeutic about a well-defined problem with a clear solution.
Accessing the Problem Sets
You can find these problem sets organized by course at /probsets. Each includes detailed solutions with code implementations in R.
The solutions show full working—mathematical derivations, implementation details, and verification against built-in functions. They’re written for someone trying to understand not just what works but why it works.
These represent coursework from 2021, during my mathematics master’s program at SIUe.