Configuration Guide
ebk uses a centralized configuration system that stores all settings in a single JSON file. This guide covers how to set up and manage your ebk configuration.
Configuration File Location
On Linux and macOS, the configuration file is stored at:
~/.config/ebk/config.json
On Windows, the equivalent location under your user profile is used.
For backward compatibility, ebk also supports the legacy ~/.ebkrc file, but the new JSON format is recommended.
Configuration Structure
The configuration file is organized into four main sections:
LLM Configuration
Settings for Large Language Model providers used in metadata enrichment:
{
  "llm": {
    "provider": "ollama",
    "model": "llama3.2",
    "host": "localhost",
    "port": 11434,
    "api_key": null,
    "temperature": 0.7,
    "max_tokens": null
  }
}
- provider: LLM provider name (ollama, openai, etc.)
- model: Model name to use
- host: Server hostname (for remote GPU servers)
- port: Server port
- api_key: API key for commercial providers (future)
- temperature: Sampling temperature (0.0-1.0, lower = more consistent)
- max_tokens: Maximum tokens to generate (null = provider default)
Server Configuration
Settings for the web server interface:
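For reference, the corresponding block in config.json looks like this (the values shown are illustrative assumptions, using the port that appears in later examples):
{
  "server": {
    "host": "127.0.0.1",
    "port": 8000,
    "auto_open_browser": true,
    "page_size": 50
  }
}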
- host: Server bind address (0.0.0.0 = all interfaces, 127.0.0.1 = localhost only)
- port: Server port
- auto_open_browser: Automatically open browser when starting server
- page_size: Default number of books per page
CLI Configuration
Default CLI behavior settings:
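In config.json this section looks like the following (the defaults shown are assumptions):
{
  "cli": {
    "verbose": false,
    "color": true,
    "page_size": 50
  }
}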
- verbose: Enable verbose logging by default
- color: Enable colored output (uses Rich)
- page_size: Default page size for list commands
Library Configuration
Library-related preferences:
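In config.json (a null value, meaning no default library is set, is an assumption):
{
  "library": {
    "default_path": null
  }
}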
- default_path: Default library path (can be used with commands that accept an optional path)
Managing Configuration
Initialize Configuration
Create a configuration file with default values:
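ebk config init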
This creates ~/.config/ebk/config.json if it doesn't exist.
View Configuration
View the entire configuration:
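Assuming a show subcommand alongside the init, get, set, and edit subcommands used elsewhere in this guide:
ebk config show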
View a specific section:
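Passing a section name to get (an assumed extension of the ebk config get llm.host form shown under Troubleshooting):
ebk config get llm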
Edit Configuration
Edit the configuration file in your default editor:
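ebk config edit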
This opens the JSON file in the editor specified by your EDITOR environment variable.
Set Configuration Values
Set individual configuration values:
# Set LLM provider
ebk config set llm.provider ollama
ebk config set llm.model llama3.2
ebk config set llm.host localhost
# Set server options
ebk config set server.port 8080
ebk config set server.auto_open_browser true
# Set CLI defaults
ebk config set cli.verbose true
ebk config set cli.page_size 100
# Set library defaults
ebk config set library.default_path ~/my-ebooks
Get Configuration Values
Retrieve specific configuration values:
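ebk config get llm.provider
ebk config get server.port
ebk config get library.default_path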
Common Configuration Scenarios
Local Ollama Setup
For running LLM features with a local Ollama instance:
ebk config init
ebk config set llm.provider ollama
ebk config set llm.model llama3.2
ebk config set llm.host localhost
ebk config set llm.port 11434
Then install and start Ollama:
# Install Ollama from https://ollama.com
# Pull the model
ollama pull llama3.2
# Ollama runs automatically on port 11434
Remote GPU Server
For using a remote GPU server (e.g., basement server with NVIDIA GPU):
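ebk config set llm.host 192.168.1.100
ebk config set llm.port 11434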
Or use hostname:
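# gpu-server.local is a placeholder; substitute your server's actual hostname
ebk config set llm.host gpu-server.local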
Web Server for Local Access Only
For security, bind the web server to localhost only:
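ebk config set server.host 127.0.0.1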
Web Server for Network Access
To allow access from other devices on your network:
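ebk config set server.host 0.0.0.0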
Then access from other devices at http://your-ip:8000
Development Mode
For development with verbose output:
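ebk config set cli.verbose true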
CLI Overrides
All configuration settings can be overridden via CLI arguments:
# Override server settings
ebk serve ~/library --host 127.0.0.1 --port 9000
# Override LLM settings
ebk enrich ~/library --host 192.168.1.50 --model mistral
# Enable verbose mode for single command
ebk --verbose import book.pdf ~/library
CLI arguments always take precedence over configuration file values.
Environment Variables
Some settings can also be controlled via environment variables:
# Ollama settings
export OLLAMA_HOST=192.168.1.100
export OLLAMA_PORT=11434
# Editor for config edit
export EDITOR=vim
Configuration Best Practices
1. Set default library path: If you primarily work with one library, set it as the default:
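ebk config set library.default_path ~/my-ebooks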
2. Use remote GPU for better performance: If you have a machine with a GPU, configure it as the LLM host:
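ebk config set llm.host 192.168.1.100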
3. Enable auto-open for convenience: If you frequently use the web interface:
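ebk config set server.auto_open_browser true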
4. Lower temperature for metadata: For consistent metadata generation:
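# 0.3 is an illustrative value; lower values give more deterministic output
ebk config set llm.temperature 0.3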
5. Adjust page size for your screen: Larger screens can show more items:
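ebk config set cli.page_size 100
ebk config set server.page_size 100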
Troubleshooting
Configuration file not found
If commands don't use your configuration:
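# Confirm the file exists at the expected location
ls ~/.config/ebk/config.json
# Create it with defaults if it is missing
ebk config init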
Invalid JSON
If you manually edited the config and broke it:
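# Validate the file and locate the syntax error (Python standard library)
python -m json.tool ~/.config/ebk/config.json
# Fix it in your editor
ebk config edit
# Or remove it and regenerate the defaults
rm ~/.config/ebk/config.json
ebk config init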
LLM connection issues
If metadata enrichment fails:
# Test Ollama connection
curl http://localhost:11434/api/tags
# Or remote server
curl http://192.168.1.100:11434/api/tags
# Check configuration
ebk config get llm.host
ebk config get llm.port
Server won't start
If the web server fails to start:
# Check if port is in use
lsof -i :8000
# Try different port
ebk serve ~/library --port 8080
# Or update config
ebk config set server.port 8080
Next Steps
- Quick Start Guide - Start using ebk
- LLM Features - Learn about AI-powered features
- Web Server Guide - Use the web interface
- CLI Reference - Complete command reference