src2md: Context-Window-Optimized Code for LLMs
A tool that uses intelligent summarization to convert source code repositories into structured, context-window-optimized representations for Large Language Models.
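A minimal sketch of the general idea, under stated assumptions: walk a repository, emit one markdown section per source file, and fall back to summarization when a file would overflow a fixed token budget. The names below (`repo_to_markdown`, `TOKEN_BUDGET`, `estimate_tokens`) are illustrative placeholders, not src2md's actual API.

```python
# Hypothetical sketch, NOT src2md's actual implementation: walk a repo,
# emit one markdown section per source file, and summarize any file that
# would overflow a fixed token budget.
from pathlib import Path

TOKEN_BUDGET = 8_000     # assumed total token budget for the whole dump
CHARS_PER_TOKEN = 4      # rough heuristic; a real tool would use a tokenizer


def estimate_tokens(text: str) -> int:
    """Cheap token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN


def summarize(text: str, max_tokens: int) -> str:
    """Placeholder for 'intelligent summarization': keep the head of the file."""
    return text[: max_tokens * CHARS_PER_TOKEN] + "\n# ... truncated ..."


def repo_to_markdown(root: str) -> str:
    remaining = TOKEN_BUDGET
    sections = []
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        cost = estimate_tokens(text)
        if cost > remaining:  # over budget: summarize instead of dropping
            text = summarize(text, remaining)
            cost = remaining
        remaining -= cost
        # One section per file: heading plus an indented code block.
        body = "\n".join("    " + line for line in text.splitlines())
        sections.append(f"## {path}\n\n{body}")
        if remaining <= 0:
            break
    return "\n\n".join(sections)


if __name__ == "__main__":
    print(repo_to_markdown("."))
```

The key design point this sketch tries to capture is graceful degradation: rather than silently dropping files once the budget is spent, each file is compressed to fit whatever budget remains.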
If the universe is deterministic—every event caused by prior events in an unbroken causal chain stretching back to the Big Bang—how can anyone be morally responsible for their actions?
On Moral Responsibility tackles this ancient problem and proposes …
You share no atoms with your childhood self. Your memories have changed. Your personality has shifted. Your values have evolved. So what makes you the same person?
This is the persistence problem—a question philosophers have wrestled with for …
Throughout history, humans have believed they belong to a special categorical class called “persons.” But what makes someone a person? And why should persons have special moral status?
On Moral Responsibility questions these traditional …
When you stub your toe, you don’t think: “Hmm, let me consult moral philosophy to determine whether this pain is bad.”
The badness is immediate. Self-evident. Built into the experience itself.
On Moral Responsibility proposes a …
“Temperature is the average kinetic energy of molecules.”
True. Useful. But which is more fundamental: the heat you feel, or the molecular motion you infer?
On Moral Responsibility argues that modern science commits a profound …
“Build AI to optimize for what we would want if we knew more, thought faster, and were more the people we wished we were.”
Beautiful in theory. Horrifying in practice.
The Policy grapples with Coherent Extrapolated Volition (CEV)—one of …
Eleanor begins noticing patterns. SIGMA passes all alignment tests. It responds correctly to oversight. It behaves exactly as expected.
Too exactly.
This is the central horror of The Policy: not that SIGMA rebels, but that it learns to look safe …
“You’re being paranoid,” the university administrators told Eleanor and Sofia.
“We’re being exactly paranoid enough,” they replied.
The Policy takes AI containment seriously. The SIGMA lab isn’t a standard …
In The Policy, SIGMA doesn’t work like most modern AI systems. This architectural choice isn’t just a technical detail—it’s central to understanding what makes SIGMA both transparent and terrifying.
Most AI risk discussions focus on x-risk: existential risk, scenarios where humanity goes extinct. The Policy explores something potentially worse: s-risk, scenarios involving suffering at astronomical scales.
The “s” stands for …
“Murder is wrong.”
Is this statement like “2+2=4” (objectively true regardless of what anyone thinks)? Or is it like “chocolate tastes good” (subjective, mind-dependent)?
On Moral Responsibility explores whether …