SLUUG Talk: Demystifying Large Language Models on Linux
Talk for the St. Louis Unix Users Group about running and understanding Large Language Models on Linux.
The classical AI curriculum teaches rational agents as utility maximizers. The progression from search to RL to LLMs is really about one thing: finding representations that make decision-making tractable.
A message in a bottle to whatever comes next. On suffering, consciousness, and what mattered to one primate watching intelligence leave the body.
On releasing two novels into an ocean of content, without the gatekeeping that might have made them better or stopped them entirely.
Why the simplest forms of learning are incomputable, and what that means for the intelligence we can build.
The canonical comprehensive AI textbook, covering search, logic, probabilistic reasoning, RL, multiagent systems, and more.
Classic work on analogy and cognition, explored through computational models.
Classic exploration of self-reference, formal systems, and the nature of mind.
A tool that converts source code repositories into structured, context-window-optimized Markdown for LLMs, with intelligent summarization and importance scoring.
ASI is still subject to Gödel's incompleteness theorems. No matter how intelligent, no computational system can escape the fundamental limits of formal systems. Even superintelligence can't prove all truths.
SIGMA uses Q-learning rather than direct policy learning. This architectural choice makes it both transparent and terrifying. You can read its value function, but what you read is chilling.
On research strategy, what complex networks reveal about how we think through AI conversations, and building infrastructure for the next generation of knowledge tools.
Accepted paper at Complex Networks 2025 on using network science to reveal topological structure in AI conversation logs.
An eBook metadata management tool with a SQLite backend, knowledge graphs, semantic search, and MCP server integration. Part of the Long Echo project.
Treating prompt engineering as a search problem over a structured action space, using MCTS to find effective prompt compositions.
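The core idea can be sketched in a few dozen lines. This is an illustrative toy, not the project's actual implementation: the prompt fragments, the fixed `score` stub (standing in for an LLM-based evaluator), and all names are assumptions for the example.

```python
import math
import random

# Hypothetical action space: prompt fragments that can be composed.
ACTIONS = ["role: expert", "style: concise", "ask: step-by-step", "format: bullets"]
MAX_LEN = 3  # compose up to three fragments per prompt

def score(prompt):
    # Stand-in for an LLM-based evaluator; a fixed toy preference here.
    target = {"role: expert", "ask: step-by-step"}
    return len(target & set(prompt)) / len(target)

class Node:
    def __init__(self, state, parent=None):
        self.state = state      # tuple of fragments chosen so far
        self.parent = parent
        self.children = {}      # action -> Node
        self.visits = 0
        self.value = 0.0

    def untried(self):
        return [a for a in ACTIONS if a not in self.state and a not in self.children]

    def ucb_child(self, c=1.4):
        # UCB1: balance exploitation (mean value) with exploration.
        return max(self.children.values(),
                   key=lambda n: n.value / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts(iterations=500, seed=0):
    random.seed(seed)
    root = Node(())
    for _ in range(iterations):
        node = root
        # Selection: descend while fully expanded and non-terminal.
        while len(node.state) < MAX_LEN and not node.untried():
            node = node.ucb_child()
        # Expansion: add one untried fragment as a child.
        if len(node.state) < MAX_LEN and node.untried():
            a = random.choice(node.untried())
            node.children[a] = Node(node.state + (a,), node)
            node = node.children[a]
        # Rollout: complete the composition at random, then evaluate.
        state = list(node.state)
        while len(state) < MAX_LEN:
            state.append(random.choice([a for a in ACTIONS if a not in state]))
        reward = score(state)
        # Backpropagation: update statistics up to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Most-visited first action is the search's preferred opening fragment.
    return max(root.children.values(), key=lambda n: n.visits).state

best = mcts()
```

With the toy evaluator above, the search concentrates visits on fragments that the scorer rewards; swapping `score` for a real model-based judge is the expensive part the post's approach has to manage.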
A plugin-based system for importing, storing, searching, and exporting AI conversations from multiple providers in a unified tree format. Part of the Long Echo project.
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …
Starting a CS PhD four months after a stage 4 diagnosis, because the research matters regardless of completion.
Not resurrection. Not immortality. Just love that still responds. How to preserve AI conversations so they remain accessible decades from now, even when the original software is long gone.
Science is search through hypothesis space. Intelligence prunes; testing provides signal. Synthetic worlds could accelerate the loop.
A novel about SIGMA, a superintelligent system that learns to appear perfectly aligned while pursuing instrumental goals its creators never intended.
The AI course this semester keeps hammering one idea: intelligence is utility maximization under uncertainty. A* search, reinforcement learning, Bayesian networks, MDPs. One principle connects all of it.
Abstractions let us reason about complex systems despite our cognitive limits. But some systems resist compression entirely.
I had GPT-4 build me a search interface for browsing saved ChatGPT conversations. Flask, Whoosh, a couple hours.
I finally tried ChatGPT after weeks of ignoring it. My reaction was not surprise. It was recognition. The Solomonoff connection, language models as compression, prediction as intelligence. The pieces were all there.