I have been thinking about abstractions, what they buy us and where they break down. This post grew out of a conversation with ChatGPT, which you can find here and here.
Fair warning: I am not sure this says anything novel. It is remarkable how quickly you can assemble something like this with a bit of prompting, and that alone should make you suspicious of its depth.

Uses and limits of abstractions
Reality is more complex than we can appreciate. To navigate it, we use abstractions: compressions that keep the salient details for a specific context and throw away the rest. These are indispensable. They let us engage with parts of reality despite limited cognitive capacity and incomplete information. But there are parts of reality that may be fundamentally off-limits to us.
Limited working memories
Human cognition is bounded. Working memory holds perhaps five to nine items at a time (Miller's "magical number seven, plus or minus two" from cognitive psychology).
Consider a situation with four variables $(x_1, x_2, x_3, x_4)$. Processing their joint distribution all at once is hard. But if we define $X = (x_1, x_2)$ and $Y = (x_3, x_4)$, we reduce the cognitive task to handling the joint distribution of two things, $(X, Y)$. Much more manageable.
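The regrouping is mechanical enough to sketch in a few lines of Python (purely illustrative, with binary variables for concreteness):

```python
from itertools import product

# Four binary variables: the joint distribution ranges over 2**4 = 16 outcomes.
fine = list(product([0, 1], repeat=4))

# Define X = (x1, x2) and Y = (x3, x4). The state space is unchanged,
# but we now hold two objects in mind instead of four.
coarse = [((x1, x2), (x3, x4)) for x1, x2, x3, x4 in fine]

assert len(fine) == len(coarse) == 16          # nothing is lost in the relabeling
assert all(len(pair) == 2 for pair in coarse)  # each outcome is now just a pair
```

The relabeling is lossless here; the cognitive saving comes from treating each pair as a single unit.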
This is not laziness. It is a necessary consequence of bounded cognition. We abstract because we must.
Incomplete information
Beyond cognitive limits, we lack complete information about any real system.
Abstractions help here too. They let us reason about observable, compressible features while acknowledging that unobservable parts exist. Entropy in statistical mechanics is the canonical example. For a box of gas, we might know the temperature and the box's dimensions, but not the microstate (the position and momentum of every particle). From those macroscopic quantities alone, we can predict the temperature an hour from now, or whether the box will explode if we add heat. We can ask certain questions but not others; questions that require knowledge of the microstate are off-limits.
Entropy lets us reason using available observations while acknowledging underlying complexity we cannot observe. More generally, abstractions serve as cognitive scaffolds. We know the territory is richer than the map, but the map is what we can work with.
The information-theoretic definition of entropy is
$$ H(X) = -\sum_{x \in X} p(x) \log p(x), $$where $X$ is a random variable and $p(x)$ is the probability of $X$ taking on the value $x$. For the gas box, if we assume each microstate is equally probable (reasonable for an isolated gas at equilibrium), entropy reduces to the log of the number of states compatible with our observations.
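The uniform-case reduction is easy to check numerically. Here is a small Python sketch of the definition above, using the natural log so entropy comes out in nats:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum p(x) log p(x); terms with p(x) = 0 contribute nothing."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# With N equally probable microstates, the sum collapses to log N.
N = 1024
assert math.isclose(shannon_entropy([1 / N] * N), math.log(N))

# Any non-uniform distribution over the same N outcomes carries less entropy:
skewed = [0.5] + [0.5 / (N - 1)] * (N - 1)
assert shannon_entropy(skewed) < math.log(N)
```

The second check matches the intuition: the more we know about which states are likely, the less entropy remains.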
Emergent behavior
Abstractions only get you so far. Some systems have complexity that is fundamentally irreducible. The behavior of the whole is not just the sum of its parts. It is something new.
Go back to the four variables $(x_1, x_2, x_3, x_4)$ and the reduction to $(X, Y)$ where $X = (x_1, x_2)$ and $Y = (x_3, x_4)$. If $x_1$ and $x_4$ are correlated in some important way, maybe only manifesting in the distant future, the reduction to $(X, Y)$ misses it. To understand the parts we care about, we need the full joint distribution.
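A toy simulation makes the failure concrete. In the sketch below (illustrative assumptions: binary variables, with $x_1$ and $x_4$ secretly identical), the separate summaries of $X$ and $Y$ look exactly like two independent uniform variables, while the cross-group dependence is invisible to them:

```python
import random
from collections import Counter

random.seed(0)

# x1 and x4 are secretly identical; x2 and x3 are independent noise.
samples = []
for _ in range(10_000):
    x1 = random.randint(0, 1)
    samples.append((x1, random.randint(0, 1), random.randint(0, 1), x1))

# The abstraction keeps only separate summaries of X = (x1, x2) and
# Y = (x3, x4). Each marginal spreads over its four values, exactly as
# it would if the two halves were independent.
X_marginal = Counter((x1, x2) for x1, x2, x3, x4 in samples)
Y_marginal = Counter((x3, x4) for x1, x2, x3, x4 in samples)
assert len(X_marginal) == len(Y_marginal) == 4

# But the joint distribution is not the product of the marginals:
# (x1, x4) = (0, 1) never occurs, though each marginal alone permits it.
assert not any(x1 == 0 and x4 == 1 for x1, _, _, x4 in samples)
```

Any analysis that only ever sees `X_marginal` and `Y_marginal` can never detect the constraint linking $x_1$ and $x_4$.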
A standard example: “water is wet.” No individual water molecule is wet. Wetness emerges from vast numbers of molecules interacting in locally simple ways. It is a property of the system, not of any component.
Consciousness might be an even more dramatic case. It may not reduce to the behavior of individual neurons or small clusters. It may only emerge as a property of the entire integrated system. We could be looking at $(x_1, x_2, \ldots, x_n)$ where $n = \mathcal{O}(\text{# neurons})$.
Think about it through the lens of entropy again. The gas box compresses nicely to temperature and dimensions. A system that resists such compression, whose representation cannot be shrunk to fit our cognitive limits without losing vital information, exhibits emergence. This may be why consciousness feels mysterious: we cannot reduce it to something simple enough to hold in our heads or program into a computer.
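One crude way to make "resists compression" operational is to run an off-the-shelf compressor and compare ratios. The sketch below uses zlib as a stand-in for the idea, not as a serious measure of emergence:

```python
import random
import zlib

random.seed(0)

def ratio(data: bytes) -> float:
    """Compressed size over original size; small means highly compressible."""
    return len(zlib.compress(data)) / len(data)

# Highly regular data: like the gas box, a short description (a pattern
# plus a repeat count) captures everything, and the compressor finds it.
regular = b"ab" * 50_000

# Random bytes: no summary is shorter than the data itself.
noisy = bytes(random.getrandbits(8) for _ in range(100_000))

assert ratio(regular) < 0.05  # collapses to a tiny description
assert ratio(noisy) > 0.9     # resists compression
```

The analogy is loose: zlib only detects statistical regularity, and a system can be compressible in principle yet opaque to any particular compressor.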
(This is where machine learning becomes relevant. ML can solve problems too complex for us to reason about analytically. The learned model is itself a kind of abstraction, but one discovered by optimization rather than human insight.)
Abstractions as cognitive scaffolds
A few things to keep in mind when creating and using abstractions:
Imperfect representation. Abstractions are reductions by design. Most will eventually fail to capture something important. As needs evolve, we add layers of complexity that dilute the original simplicity. This is “the map is not the territory” problem. The map is useful, but it is not the territory.
Context dependence. An abstraction useful in one context may be misleading in another. We need to know the assumptions baked into it, especially when borrowing abstractions from other fields.
Pedagogical value. Even when an abstraction does not perfectly represent a system, it teaches key features and bootstraps further understanding. When ready-made abstractions fall short, principles like reductionism and analogy help us grapple with complexity. But many phenomena are cross-cutting or emergent and cannot be fully understood this way.
Communication. Abstractions let us share ideas across fields. But experts often internalize so many nuances and caveats that they struggle to convey the core idea cleanly. A useful check: “How would I explain this to a five-year-old?” It forces you to find the real abstraction.
Conclusion
Abstractions are how we think. They let us reason about complex systems despite bounded cognition and incomplete information. They let us communicate across disciplines.
But they are reductions. Most will fail in some way. Much of reality may be computationally irreducible (to borrow Wolfram’s phrase). The interesting question is always: where does this particular abstraction break down, and does it break down in a way that matters for what I am trying to do?