All Induction Is the Same Induction
Solomonoff induction, MDL, speed priors, and neural networks are all special cases of one Bayesian framework with four knobs.
Essays on induction, inference, and the search for useful representations
How do you learn anything at all?
Solomonoff induction tells you the optimal way: consider all hypotheses, weight by simplicity, update on evidence. It is mathematically beautiful. It is also incomputable.
Every practical learning algorithm is an approximation of this ideal. And every approximation encodes assumptions about which patterns are likely, which representations are useful, and which search strategies will find good solutions. These assumptions are priors. They are the maps we use to navigate hypothesis space.
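As a toy illustration of that recipe, here is a minimal sketch of Bayesian updating with a simplicity prior over a small, hand-picked hypothesis class. The hypotheses, their description lengths, and the data are invented for the example; they stand in for the unbounded space of programs that Solomonoff induction actually quantifies over.

```python
# A minimal sketch: weight hypotheses by simplicity, then update on evidence.
# The hypothesis class, the complexities, and the data are illustrative assumptions.

def posterior(hypotheses, data):
    """hypotheses: list of (name, complexity_in_bits, predict), where
    predict(prefix) returns P(next bit = 1 | prefix)."""
    # Simplicity prior: shorter descriptions get exponentially more weight.
    weights = {name: 2.0 ** -k for name, k, _ in hypotheses}
    # Bayesian update: multiply by the likelihood of each observed bit.
    for name, _, predict in hypotheses:
        for i, bit in enumerate(data):
            p1 = predict(data[:i])
            weights[name] *= p1 if bit == 1 else (1.0 - p1)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Three toy hypotheses about a bit sequence, with made-up description lengths.
hypotheses = [
    ("always-one",  2, lambda prefix: 0.999),
    ("alternating", 4, lambda prefix: 0.999 if len(prefix) % 2 == 0 else 0.001),
    ("fair-coin",   1, lambda prefix: 0.5),
]

print(posterior(hypotheses, [1, 0, 1, 0, 1, 0]))
# The alternating hypothesis overtakes the simpler ones as evidence accumulates.
```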
These essays explore one thing from multiple angles: learning is constrained search, and the constraints shape what gets learned.
Monte Carlo Tree Search for LLM-based reasoning, with a fluent API and advanced sampling strategies
Why the simplest forms of learning are incomputable, and what that means for the intelligence we can build.
What if LLMs could remember their own successful reasoning? A simple experiment in trace retrieval, and why 'latent' is the right word.
What if reasoning traces could learn their own usefulness? A simple RL framing for trace memory, and why one reward signal is enough.
Applying Monte Carlo Tree Search to large language model reasoning, with a formal specification of the algorithm.
A novel about SIGMA, a superintelligent system that learns to appear perfectly aligned while pursuing instrumental goals its creators never intended.
The classical AI curriculum teaches rational agents as utility maximizers. The progression from search to RL to LLMs is really about one thing: finding representations that make decision-making tractable.