Seeing Structure First
A reflection on eleven explorations in generic programming—how algorithms arise from algebraic structure.
A series exploring Alex Stepanov's insight that algorithms arise from algebraic structure
These posts explore the philosophy and practice of generic programming as taught by Alex Stepanov, architect of the C++ Standard Template Library.
The central insight: algorithms arise from algebraic structure. The same algorithm that works on integers works on matrices, polynomials, and modular integers—not by accident, but because they share algebraic properties. When you recognize “this is a monoid” or “this is a Euclidean domain,” you immediately know which algorithms apply.
Each post demonstrates one beautiful idea with minimal, pedagogical C++ code (~100-300 lines). The implementations prioritize clarity over cleverness—code that reads like a textbook that happens to compile.
The Russian peasant algorithm teaches us that one algorithm can compute products, powers, Fibonacci numbers, and more—once we see the underlying algebraic structure.
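To make this concrete, here is a minimal sketch in the spirit of the post (the name `power` and its signature are illustrative, not the post's actual code): one repeated-squaring loop, parameterized only by an associative operation.

```cpp
#include <iostream>

// Generic "Russian peasant" power: O(log n) applications of op.
// Requires only that op be associative; T is otherwise unconstrained.
template <typename T, typename Op>
T power(T a, unsigned n, Op op) {
    // precondition: n > 0 (handling the identity element is omitted)
    while ((n & 1) == 0) { a = op(a, a); n >>= 1; }
    T result = a;
    n >>= 1;
    while (n != 0) {
        a = op(a, a);
        if (n & 1) result = op(result, a);
        n >>= 1;
    }
    return result;
}

int main() {
    auto plus  = [](long x, long y) { return x + y; };
    auto times = [](long x, long y) { return x * y; };
    std::cout << power(7L, 13u, plus)  << '\n'; // 91: repeated addition is multiplication
    std::cout << power(2L, 10u, times) << '\n'; // 1024: repeated multiplication is exponentiation
}
```

Swap in 2x2 matrix multiplication and the same loop yields Fibonacci numbers in logarithmic time.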
Integers modulo N form a ring—an algebraic structure that determines which algorithms apply. Understanding this structure unlocks algorithms from cryptography to competitive programming.
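As a sketch of what that structure looks like in code (a hypothetical `Mod<N>` class; the post's own design may differ), the ring operations take only a few lines, and once they exist every algorithm written against a ring applies:

```cpp
#include <cstdint>
#include <iostream>

// Integers modulo N as a ring (sketch). Assumes N is small enough
// that sums and products fit in 64 bits before reduction.
template <std::uint64_t N>
struct Mod {
    std::uint64_t v = 0;
    friend Mod operator+(Mod a, Mod b) { return {(a.v + b.v) % N}; }
    friend Mod operator*(Mod a, Mod b) { return {(a.v * b.v) % N}; }
};

int main() {
    Mod<7> a{5}, b{4};
    std::cout << (a + b).v << ' ' << (a * b).v << '\n'; // 2 6
}
```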
The Miller-Rabin primality test demonstrates how a probabilistic algorithm can drive its probability of error arbitrarily low, trading absolute certainty for practical efficiency.
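A compact sketch of the test for 32-bit inputs (illustrative code, not the post's): with the fixed witness set {2, 7, 61}, the test is known to be deterministic for all n below 4,759,123,141, which covers the entire 32-bit range; for larger n one falls back to random witnesses.

```cpp
#include <cstdint>
#include <iostream>

// Miller-Rabin for 32-bit n (sketch).
bool is_prime(std::uint32_t n) {
    if (n < 4) return n > 1;                // handle 0, 1, 2, 3
    if (n % 2 == 0) return false;
    std::uint32_t d = n - 1;
    int r = 0;
    while (d % 2 == 0) { d /= 2; ++r; }     // n - 1 = 2^r * d, d odd
    for (std::uint64_t a : {2u, 7u, 61u}) {
        if (a % n == 0) continue;           // witness is a multiple of n
        std::uint64_t x = 1, base = a % n, e = d;
        while (e) {                         // x = a^d mod n by repeated squaring
            if (e & 1) x = x * base % n;
            base = base * base % n;
            e >>= 1;
        }
        if (x == 1 || x == n - 1) continue;
        bool composite = true;
        for (int i = 1; i < r && composite; ++i) {
            x = x * x % n;
            if (x == n - 1) composite = false;
        }
        if (composite) return false;        // a witnesses that n is composite
    }
    return true;
}

int main() {
    std::cout << is_prime(97) << ' ' << is_prime(561) << '\n'; // 1 0 (561 is a Carmichael number)
}
```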
Rational numbers give us exact arithmetic where floating-point fails. The implementation reveals deep connections to GCD, the Stern-Brocot tree, and the algebraic structure of fields.
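A minimal sketch of the idea (a hypothetical `Rational` struct, not the post's class): keep every value in lowest terms by dividing out the GCD, and exactness follows.

```cpp
#include <iostream>
#include <numeric> // std::gcd

// Exact rational arithmetic (sketch). Invariant: den > 0 and
// gcd(|num|, den) == 1, maintained by the constructor.
struct Rational {
    long long num, den;
    Rational(long long n, long long d) : num(n), den(d) {
        // precondition: d != 0
        if (den < 0) { num = -num; den = -den; }
        long long g = std::gcd(num < 0 ? -num : num, den);
        num /= g; den /= g;
    }
    friend Rational operator+(Rational a, Rational b) {
        return {a.num * b.den + b.num * a.den, a.den * b.den};
    }
};

int main() {
    Rational r = Rational{1, 10} + Rational{2, 10};
    std::cout << r.num << '/' << r.den << '\n'; // exactly 3/10; compare 0.1 + 0.2 in doubles
}
```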
The same GCD algorithm works for integers and polynomials because both are Euclidean domains. This profound insight shows how algebraic structure determines algorithmic applicability.
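The point in code (a sketch; names are illustrative): the body mentions nothing about integers, only the operations a Euclidean domain must provide.

```cpp
#include <iostream>

// Euclid's algorithm over any Euclidean domain E: all it needs is a
// remainder operation % that strictly decreases a norm, plus equality
// and a zero. Integers qualify; so do polynomials over a field.
template <typename E>
E gcd(E a, E b) {
    while (b != E(0)) {
        E r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main() {
    std::cout << gcd(246, 36) << '\n'; // 6
    // Given a Polynomial type providing %, !=, and Polynomial(0),
    // the identical function body computes polynomial GCDs.
}
```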
elementa is a pedagogical linear algebra library where every design decision prioritizes clarity over cleverness—code that reads like a textbook that happens to compile.
Dual numbers extend our number system with an infinitesimal epsilon where epsilon^2 = 0. Evaluating f(x + epsilon) yields f(x) + epsilon * f'(x)—the derivative emerges automatically from the algebra.
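Here is how small the trick is (a sketch; the post's class may differ): encode the rule epsilon^2 = 0 in the multiplication table for (value, derivative) pairs, and derivatives fall out of ordinary evaluation.

```cpp
#include <iostream>

// A dual number represents val + der * epsilon, with epsilon^2 = 0.
struct Dual {
    double val, der;
    friend Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.der + b.der}; }
    friend Dual operator*(Dual a, Dual b) {
        // (a + a'e)(b + b'e) = ab + (a'b + ab')e, since e^2 = 0:
        // the product rule is built into the algebra.
        return {a.val * b.val, a.der * b.val + a.val * b.der};
    }
};

int main() {
    Dual x{3.0, 1.0};            // x = 3 + 1*epsilon: seed dx/dx = 1
    Dual f = x * x * x + x;      // f(x) = x^3 + x
    std::cout << f.val << ' ' << f.der << '\n'; // 30 and f'(3) = 3*9 + 1 = 28
}
```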
The art of numerical differentiation lies in choosing step size h wisely—small enough that the approximation is good, but not so small that floating-point errors dominate.
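A quick illustration of the trade-off (an assumed example: central differences on sin at x = 1): the error first shrinks as h shrinks, then grows again as rounding noise takes over.

```cpp
#include <cmath>
#include <iostream>

// Central difference f'(x) ~ (f(x+h) - f(x-h)) / (2h). A common rule of
// thumb (assumed here) is h ~ cbrt(machine epsilon) * |x| for this formula,
// balancing truncation error against rounding error.
int main() {
    auto f = [](double x) { return std::sin(x); };
    double x = 1.0;
    for (double h : {1e-1, 1e-5, 1e-13}) {
        double d = (f(x + h) - f(x - h)) / (2 * h);
        std::cout << "h = " << h << "  error = " << d - std::cos(x) << '\n';
    }
    // too-large h: truncation dominates; too-small h: cancellation dominates
}
```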
Reverse-mode automatic differentiation powers modern machine learning. Understanding how it works demystifies PyTorch, JAX, and TensorFlow—it's just the chain rule applied systematically.
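A toy version of the machinery (an illustrative `Tape` class, nothing like production code): the forward pass records each operation's local partial derivatives; the backward pass walks the tape in reverse, accumulating adjoints by the chain rule.

```cpp
#include <iostream>
#include <vector>

// A minimal reverse-mode AD tape (sketch).
struct Tape {
    struct Node { double dx0, dx1; int p0, p1; }; // local partials + parent indices
    std::vector<Node> nodes;
    int var() { nodes.push_back({0, 0, -1, -1}); return (int)nodes.size() - 1; }
    int add(int a, int b) { nodes.push_back({1, 1, a, b}); return (int)nodes.size() - 1; }
    int mul(int a, int b, double va, double vb) {
        nodes.push_back({vb, va, a, b}); // d(ab)/da = b, d(ab)/db = a
        return (int)nodes.size() - 1;
    }
    std::vector<double> grad(int out) {
        std::vector<double> adj(nodes.size(), 0.0);
        adj[out] = 1.0;                  // seed: d(out)/d(out) = 1
        for (int i = (int)nodes.size() - 1; i >= 0; --i) {
            if (nodes[i].p0 >= 0) adj[nodes[i].p0] += nodes[i].dx0 * adj[i];
            if (nodes[i].p1 >= 0) adj[nodes[i].p1] += nodes[i].dx1 * adj[i];
        }
        return adj;                      // one backward pass gives all partials
    }
};

int main() {
    Tape t;
    double xv = 3.0, yv = 4.0;
    int x = t.var(), y = t.var();
    int f = t.add(t.mul(x, y, xv, yv), y);     // f = x*y + y
    auto adj = t.grad(f);
    std::cout << adj[x] << ' ' << adj[y] << '\n'; // df/dx = y = 4, df/dy = x + 1 = 4
}
```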
Numerical integration demonstrates both classical numerical analysis and Stepanov's philosophy: by identifying the minimal algebraic requirements, our quadrature routines work with dual numbers for automatic differentiation under the integral.
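A sketch of what "minimal algebraic requirements" buys (an illustrative `midpoint` routine, not the post's quadrature code): the accumulator type T needs only +, *, and construction from double, so T = double works out of the box, and a dual-number type providing the same operations differentiates under the integral sign for free.

```cpp
#include <iostream>

// Composite midpoint rule, generic over the value type T.
template <typename T, typename F>
T midpoint(F f, double a, double b, int n) {
    double h = (b - a) / n;
    T sum(0.0);
    for (int i = 0; i < n; ++i)
        sum = sum + f(a + (i + 0.5) * h);  // sample each subinterval's midpoint
    return sum * T(h);
}

int main() {
    auto f = [](double x) { return x * x; };
    std::cout << midpoint<double>(f, 0.0, 1.0, 1000) << '\n'; // ~ 1/3
}
```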
Sean Parent's type erasure technique provides value-semantic polymorphism without inheritance. Combined with Stepanov's algebraic thinking, we can type-erase entire algebraic structures.
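A compressed sketch of the pattern (identifiers like `drawable_t` are illustrative): the private `concept_t`/`model_t` pair hides the inheritance, while clients see only a copyable value.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Free functions the erased type dispatches to; declared before the
// wrapper so unqualified lookup inside the template can find them.
void draw(int x, std::ostream& os) { os << "int: " << x << '\n'; }
void draw(const std::string& s, std::ostream& os) { os << "string: " << s << '\n'; }

// drawable_t stores any T for which draw(T, ostream&) exists.
class drawable_t {
    struct concept_t {
        virtual ~concept_t() = default;
        virtual void draw_(std::ostream&) const = 0;
    };
    template <typename T>
    struct model_t final : concept_t {
        explicit model_t(T x) : data_(std::move(x)) {}
        void draw_(std::ostream& os) const override { draw(data_, os); }
        T data_;
    };
    std::shared_ptr<const concept_t> self_; // immutable state shared on copy
public:
    template <typename T>
    drawable_t(T x) : self_(std::make_shared<model_t<T>>(std::move(x))) {}
    friend void draw(const drawable_t& d, std::ostream& os) { d.self_->draw_(os); }
};

int main() {
    std::vector<drawable_t> document{42, std::string("hello")};
    for (const auto& d : document) draw(d, std::cout); // polymorphic, no visible pointers
}
```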
Three approaches to computing derivatives (forward-mode AD, reverse-mode AD, and finite differences) come with different trade-offs. Understanding when to use each is essential for numerical computing and machine learning.
Many structures come in pairs: forward/reverse AD, push/pull iteration, encode/decode. Recognizing duality lets you transfer theorems and insights between domains.