All Induction Is the Same Induction
Solomonoff induction, MDL, speed priors, and neural networks are all special cases of one Bayesian framework with four knobs.
Essays on induction, inference, and the search for useful representations
How do you learn anything at all?
Solomonoff induction tells you how to do it optimally: consider all hypotheses, weight by simplicity, update on evidence. It’s mathematically beautiful.
It’s also incomputable.
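To see why, here is a toy sketch (an illustration of mine, not code from the essays): a handful of hand-rolled "programs" stands in for the set of all programs on a universal machine, each weighted by 2^(-description length). The truncation to a finite list is exactly what real Solomonoff induction forbids, and the need to run every program from an infinite list is where the incomputability comes from.

```python
# Toy Solomonoff-style prediction over a *finite* stand-in for the space
# of all programs. Each "program" repeats a pattern forever; its integer
# "length" stands in for description length in bits, so its prior weight
# is 2^(-length).
def predict_next_bit(history, programs):
    """Mix every program consistent with the history, weighted by 2^-length."""
    weights = {0: 0.0, 1: 0.0}
    for pattern, length in programs:
        forecast = [pattern[i % len(pattern)] for i in range(len(history) + 1)]
        if forecast[:-1] == history:  # program reproduces the data so far
            weights[forecast[-1]] += 2.0 ** -length
    total = sum(weights.values())
    if total == 0:  # nothing consistent: fall back to uniform
        return {0: 0.5, 1: 0.5}
    return {bit: w / total for bit, w in weights.items()}

programs = [([0], 3), ([1], 3), ([0, 1], 5), ([1, 1, 0], 8), ([0, 1, 0, 1, 1], 9)]
print(predict_next_bit([0, 1, 0, 1], programs))
# {0: ~0.94, 1: ~0.06} -- the shorter consistent program dominates.
```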
Every practical learning algorithm is an approximation. And every approximation encodes assumptions—about what patterns are likely, what representations are useful, what search strategies will find good solutions.
These assumptions are priors. They’re the maps we use to navigate hypothesis space.
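One way to make the "four knobs" claim from the subtitle concrete (a minimal sketch of mine, assuming the knobs are the hypothesis class, the prior, the likelihood, and the search strategy over the posterior; the essay's exact decomposition may differ): fix all four and you have specified a learner. Swapping the prior alone already changes what gets learned.

```python
import math

# Knob 4: the search strategy. Here it is exact enumeration of the
# posterior; SGD, MCMC, or beam search are other settings of this knob.
def posterior(hypotheses, prior, likelihood, data):
    scores = {h: prior(h) * likelihood(h, data) for h in hypotheses}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# Knob 1: the hypothesis class -- Bernoulli coins with bias p.
hypotheses = [i / 10 for i in range(1, 10)]

# Knob 2: the prior. Uniform, versus a crude "simplicity" prior that
# favors the fair coin (a stand-in for shorter description length).
uniform = lambda p: 1.0
simplicity = lambda p: 4.0 if p == 0.5 else 1.0

# Knob 3: the likelihood -- i.i.d. coin flips.
def likelihood(p, data):
    return math.prod(p if flip else 1 - p for flip in data)

data = [1, 1, 1, 0, 1]
best = lambda post: max(post, key=post.get)
print(best(posterior(hypotheses, uniform, likelihood, data)))     # 0.8
print(best(posterior(hypotheses, simplicity, likelihood, data)))  # 0.5
```

Under this framing, MDL sets the prior knob to description length, a speed prior additionally penalizes runtime, and a neural network's architecture and optimizer act as an implicit prior plus a particular choice of search strategy.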
These essays explore a single idea from multiple angles: learning is constrained search, and the constraints shape what gets learned.
If the learning problem is fundamentally about search, and search requires priors, what should those priors be?
Where should we look?
An exploration of why the simplest forms of learning may be incomputable, and what that means for the intelligence we can build.
What if LLMs could remember their own successful reasoning? A simple experiment in trace retrieval, and why 'latent' is the right word.
What if reasoning traces could learn their own usefulness? A simple RL framing for trace memory, and why one reward signal is enough.
Applying Monte Carlo Tree Search to large language model reasoning with a rigorous formal specification.
A novel about SIGMA, a superintelligent system that learns to appear perfectly aligned while pursuing instrumental goals its creators never intended. Some technical questions become narrative questions.
The classical AI curriculum teaches rational agents as utility maximizers. The progression from search to RL to LLMs is really about one thing: finding representations that make decision-making tractable.