# Installation

## Requirements
- Python 3.8 or higher
- No external dependencies for core functionality
- Optional: API keys for LLM providers
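Before installing, you can confirm that your interpreter meets the version requirement:

```bash
# Should report Python 3.8 or higher
python --version
```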
## Install from Source
```bash
# Clone the repository
git clone https://github.com/queelius/dreamlog.git
cd dreamlog

# Install in development mode
pip install -e .
```
## Install with pip (coming soon)
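Once the package is published to PyPI, installation should be a single command (the package name is assumed to match the extras syntax shown in the next section):

```bash
# Not yet available on PyPI -- use the source install above for now
pip install dreamlog
```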
## Install with Extra Dependencies
```bash
# For REST API server
pip install dreamlog[api]

# For Jupyter integration
pip install dreamlog[jupyter]

# For VS Code Language Server
pip install dreamlog[lsp]

# All integrations
pip install dreamlog[all]
```
## Verify Installation
```python
# Test basic functionality
from dreamlog.pythonic import dreamlog

jl = dreamlog()
jl.fact("parent", "john", "mary")

for result in jl.query("parent", "john", "X"):
    print(f"Success! Found: {result['X']}")
```
## Configure LLM Providers
### OpenAI
```bash
export OPENAI_API_KEY="sk-..."
export DreamLog_LLM_PROVIDER="openai"
export OPENAI_MODEL="gpt-4"  # Optional, defaults to gpt-3.5-turbo
```
### Anthropic
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
export DreamLog_LLM_PROVIDER="anthropic"
export ANTHROPIC_MODEL="claude-3-opus-20240229"  # Optional
```
### Ollama (Local)
```bash
# Start Ollama server first
ollama serve

# Configure DreamLog
export DreamLog_LLM_PROVIDER="ollama"
export OLLAMA_MODEL="llama2"  # Or any model you have
export OLLAMA_BASE_URL="http://localhost:11434"  # Default
```
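If the model you configured is not already available locally, pull it first with the standard Ollama CLI (substitute whichever model you set above):

```bash
# Download the model referenced by OLLAMA_MODEL
ollama pull llama2
```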
### Configuration File

Create `~/.dreamlog/llm_config.json`:
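The exact schema is covered on the LLM Configuration page. As a minimal sketch, assuming the file mirrors the environment variables above (provider, model, API key), it might look like this:

```bash
# Illustrative structure only: the key names below are an assumption
# mirroring the environment variables; see the LLM Configuration page.
mkdir -p ~/.dreamlog
cat > ~/.dreamlog/llm_config.json << 'EOF'
{
  "provider": "openai",
  "model": "gpt-4",
  "api_key": "sk-..."
}
EOF
```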
## Test LLM Integration
```python
from dreamlog.pythonic import dreamlog

# Will auto-detect from environment or config file
jl = dreamlog(llm_provider="openai")

# Query for undefined knowledge
# LLM will generate relevant facts/rules
for result in jl.query("capital", "france", "X"):
    print(f"The capital of France is {result['X']}")
```
## Troubleshooting
### Import Error
If you get import errors, ensure DreamLog is in your Python path:
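For a source checkout, a common fix is to add the repository root to `PYTHONPATH` (the path below is a placeholder for wherever you cloned the repo), or simply re-run `pip install -e .` from the repository root:

```bash
# Replace /path/to/dreamlog with your actual clone location
export PYTHONPATH="/path/to/dreamlog:$PYTHONPATH"
```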
### LLM Provider Not Found
If LLM auto-detection fails:
```python
from dreamlog.llm_config import LLMConfig

# List available providers
print(LLMConfig.list_providers())

# Check current configuration
provider = LLMConfig.auto_detect()
print(f"Detected: {type(provider).__name__}")
```
### Network Issues with LLM
For environments behind proxies:
```bash
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
```
## Next Steps
- Quick Start Guide - Get running in 5 minutes
- Tutorial - Learn DreamLog step by step
- LLM Configuration - Advanced LLM setup