SLUUG: Demystifying Large Language Models (LLMs) on Linux: From Theory to Application
Talk for the St. Louis Unix Users Group about running and understanding Large Language Models on Linux.
A guided tour through my open-source ecosystem—from encrypted search theory and statistical reliability to Unix-philosophy CLI tools, AI research, and speculative fiction. How 120+ projects connect, where the gaps are, and where to start.
The classical AI curriculum teaches rational agents as utility maximizers. The progression from search to RL to LLMs is really about one thing: finding representations that make decision-making tractable.
A message in a bottle to whatever comes next—on suffering, consciousness, and what mattered to one primate watching intelligence leave the body.
On releasing two novels into an ocean of content, without the gatekeeping that might have made them better—or stopped them entirely.
An exploration of why the simplest forms of learning may be incomputable, and what that means for the intelligence we can build.
The canonical comprehensive AI textbook, covering search, logic, probabilistic reasoning, RL, multiagent systems, and more.
A classic work exploring analogy and cognition through computational models.
Classic exploration of self-reference, formal systems, and the nature of mind.
A tool that converts source code repositories into structured, context-window-optimized representations for Large Language Models with intelligent summarization.
ASI is still subject to Gödel's incompleteness theorems. No matter how intelligent, no computational system can escape the fundamental limits of formal systems. Even superintelligence can't prove all truths.
SIGMA uses Q-learning rather than direct policy learning. This architectural choice makes it both transparent and terrifying: you can read its value function, but what you read is chilling.
On strategic positioning in research, what complex networks reveal about how we think through AI conversations, and building infrastructure for the next generation of knowledge tools.
Accepted paper at Complex Networks 2025 on using network science to reveal topological structure in AI conversation logs.
EBK is a comprehensive eBook metadata management tool that combines a robust SQLite backend with AI-powered features including knowledge graphs, semantic search, and MCP server integration for AI assistants.
A new approach to LLM reasoning that combines Monte Carlo Tree Search with structured action spaces for compositional prompting.
A powerful, plugin-based system for managing AI conversations from multiple providers. Import, store, search, and export conversations in a unified tree format while preserving provider-specific details. Built for the Long Echo project—preserving AI …
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …
Starting a CS PhD focused on AI research four months after a stage 4 diagnosis—because the research matters regardless of completion.
Not resurrection. Not immortality. Just love that still responds. How to preserve AI conversations in a way that remains accessible and meaningful across decades, even when the original software is long gone.
Science is search through hypothesis space. Intelligence prunes; testing provides signal. Synthetic worlds could accelerate the loop.
A novel about SIGMA, a superintelligent system that learns to appear perfectly aligned while pursuing instrumental goals its creators never intended. Some technical questions become narrative questions.
Intelligence as utility maximization under uncertainty — a unifying framework connecting A* search, reinforcement learning, Bayesian networks, and MDPs. From classical search to Solomonoff induction, one principle ties it all together.
Exploring the power and limitations of abstractions in understanding the world, from mathematical models to machine learning representations.
Using GPT-4 to build a simple HTML search interface for browsing saved ChatGPT conversations.
Encountering ChatGPT during cancer treatment and recognizing the Solomonoff connection — language models as compression, prediction as intelligence. A personal inflection point reconnecting with AI research after years in survival mode.