src2md: Context-Window-Optimized Code for LLMs
A tool that converts source code repositories into structured, context-window-optimized representations for Large Language Models with intelligent summarization.
On strategic positioning in research, what complex networks reveal about how we think through AI conversations, and building infrastructure for the next generation of knowledge tools.
A new approach to LLM reasoning that combines Monte Carlo Tree Search with structured action spaces for compositional prompting.
A powerful, plugin-based system for managing AI conversations from multiple providers. Import, store, search, and export conversations in a unified tree format while preserving provider-specific details. Built for the Long Echo project—preserving AI …
A logic programming system that alternates between wake and sleep phases—using LLMs for knowledge generation during wake, and compression-based learning during sleep.
A mathematical framework that treats language models as algebraic objects with rich compositional structure.
Applying Monte Carlo Tree Search to large language model reasoning with a rigorous formal specification.
This blog post is from a chat I had with ChatGPT, which can be found here and here.
I’m not sure if this is a good blog post, but I’m posting it anyway. It’s remarkable how quickly you can slap stuff like this together, and …
After discovering ChatGPT in late 2022, I became obsessed with running LLMs locally. Cloud APIs are convenient, but I wanted:
I finally noticed ChatGPT this week. Everyone’s been talking about it for weeks, but I was buried in cancer treatment, chemo recovery, surgery prep, and thesis work on Weibull distributions.
When I finally tried it, my reaction wasn’t …