
The Learning Problem

Essays on induction, inference, and the search for useful representations

7 parts

How do you learn anything at all?

Solomonoff induction tells you the optimal way: consider all hypotheses, weight by simplicity, update on evidence. It is mathematically beautiful. It is also incomputable.
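The recipe in that sentence can be sketched in miniature. This is a hedged toy, not Solomonoff induction proper: the real thing sums over all programs and is incomputable, so the code below fakes it with a tiny hand-picked hypothesis class (the names and description lengths are invented for illustration). Each hypothesis predicts the next bit of a sequence, gets a prior weight of 2^(-description length), and is updated bit by bit on the evidence.

```python
from fractions import Fraction

# Toy hypothesis class (hypothetical names and lengths).
# Each entry: (description_length, predict(history) -> P(next bit = 1)).
hypotheses = {
    "always_one":  (2, lambda h: Fraction(1)),
    "always_zero": (2, lambda h: Fraction(0)),
    "alternate":   (4, lambda h: Fraction(1) if len(h) % 2 == 0 else Fraction(0)),
    "fair_coin":   (6, lambda h: Fraction(1, 2)),
}

def posterior(evidence):
    """Weight each hypothesis by 2^-length, then update on each observed bit."""
    weights = {}
    for name, (length, predict) in hypotheses.items():
        w = Fraction(1, 2 ** length)           # simplicity prior
        for i, bit in enumerate(evidence):
            p_one = predict(evidence[:i])
            w *= p_one if bit == 1 else 1 - p_one  # Bayesian update
        weights[name] = w
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

post = posterior([1, 0, 1, 0, 1])
# "alternate" fits the data perfectly and dominates; "fair_coin"
# survives with a sliver of weight; the constant hypotheses are ruled out.
```

The three knobs from the sentence are all visible: the hypothesis set (consider all hypotheses), the 2^(-length) weights (weight by simplicity), and the per-bit likelihood multiplication (update on evidence).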

Every practical learning algorithm is an approximation. And every approximation encodes assumptions about what patterns are likely, what representations are useful, what search strategies will find good solutions. These assumptions are priors. They are the maps we use to navigate hypothesis space.
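One way to see priors doing this work, in a deliberately small sketch (all names are mine, not the essays'): two learners observe the same evidence, 7 heads in 10 coin flips, but hold different priors over the coin's bias p. A grid over [0, 1] stands in for the continuous posterior.

```python
import math

# Discretize the coin's bias p into 101 grid points.
grid = [i / 100 for i in range(101)]

def posterior_mean(prior, heads, tails):
    # Unnormalized posterior on the grid: prior(p) * likelihood(p).
    weights = [prior(p) * p**heads * (1 - p)**tails for p in grid]
    total = sum(weights)
    return sum(p * w for p, w in zip(grid, weights)) / total

uniform = lambda p: 1.0                                      # "any bias is plausible"
fair_leaning = lambda p: math.exp(-((p - 0.5) / 0.05) ** 2)  # "coins are usually fair"

print(posterior_mean(uniform, 7, 3))       # near 0.67
print(posterior_mean(fair_leaning, 7, 3))  # pulled back toward 0.5
```

Same data, same update rule, different conclusions: the prior is the map, and different maps route the same evidence to different destinations.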

The Idea

These essays explore one thing from multiple angles: learning is constrained search, and the constraints shape what gets learned.

  • All induction is Bayesian inference with different knobs
  • The simplest learning is impossible, forcing approximations
  • Those approximations (priors, architectures, objectives) shape the resulting intelligence
  • Scale plus simple algorithms beat clever engineering, but you still need the right inductive biases

The Arc

  1. Theory: Why all induction reduces to the same framework
  2. Incomputability: Why we are forced into approximations
  3. Memory: How accumulated experience becomes learned priors
  4. Value: How systems learn what is useful, not just what is correct
  5. Search: How tree search navigates reasoning space
  6. Agents: How optimization pressure shapes emergent behavior

Posts in this Series
