# Testing Quick Reference

*For developers working on Complex Network RAG*
## Running Tests

```bash
# Run all tests
pytest

# Run all tests with coverage
pytest --cov=src --cov-report=term-missing

# Run specific test file
pytest tests/test_repl_session_state.py -v

# Run tests matching a pattern
pytest -k "session" -v

# Run with verbose output
pytest -v

# Run fast tests only (skip slow)
pytest -m "not slow"

# Run with parallel execution (requires pytest-xdist)
pytest -n auto
```
## Test Organization

```text
tests/
├── test_storage.py                     # Storage layer (100% coverage)
├── test_embeddings.py                  # Embedding providers (82%)
├── test_network_rag.py                 # Core RAG class (91%)
├── test_fluent_api.py                  # Fluent API (98%)
├── test_cli.py                         # CLI interface (99%)
├── test_repl_basic.py                  # REPL basics (100%)
├── test_repl_commands.py               # REPL commands (87%)
├── test_repl_config.py                 # REPL config (92%)
├── test_repl_session_state.py          # REPL sessions (NEW)
├── test_config_builder.py              # Config templates (100%)
├── test_yaml_parser.py                 # YAML DSL (82%)
├── test_structured_integration.py      # Integration tests (99%)
└── test_hybrid_linkage_integration.py  # Integration tests (100%)
```
## Coverage Targets by Module
| Priority | Module | Current | Target | Status |
|---|---|---|---|---|
| ✅ | Storage | 100% | 100% | Excellent |
| ✅ | Linkage | 100% | 100% | Excellent |
| ✅ | Fluent API | 98% | 95%+ | Excellent |
| ✅ | NetworkRAG | 91% | 90%+ | Good |
| ⚠️ | REPL | 70% | 90% | Needs work |
| ❌ | Config Builder | 30% | 85% | CRITICAL |
| ❌ | LLM Providers | 43% | 85% | HIGH |
| ❌ | Visualization | 8% | 60% | MEDIUM |
## Writing Good Tests: Quick Checklist

### ✅ DO

- Test behavior, not implementation
- Use Given-When-Then structure
- Write descriptive test names
- Test error paths
- Use fixtures for common setup

### ❌ DON'T

- Test private methods
- Write brittle assertions
- Create test dependencies
- Mock everything
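The Given-When-Then structure above can be sketched as follows; the `add_document` helper here is hypothetical, not part of the real codebase:

```python
def add_document(store, doc_id, content):
    # Hypothetical helper: stores content under doc_id, rejecting empty content
    if not content:
        raise ValueError("content must be non-empty")
    store[doc_id] = content

def test_add_document_stores_content():
    """Given an empty store, when a document is added, then it is retrievable."""
    # Given: an empty store
    store = {}
    # When: a document is added
    add_document(store, "doc1", "hello")
    # Then: it is retrievable by id
    assert store["doc1"] == "hello"

def test_add_document_rejects_empty_content():
    """Given a store, when empty content is added, then ValueError is raised."""
    store = {}
    try:
        add_document(store, "doc1", "")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Note that both tests name the behavior under test, not the implementation details, and the error path gets its own test.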
## Common Test Patterns

### Pattern 1: Testing Properties

```python
def test_connected_property_reflects_storage_state(self):
    """Test connected property returns True when storage exists"""
    session = ReplSession()

    # Initially not connected
    assert not session.connected

    # After connecting
    session.storage = SQLiteStorage(":memory:")
    assert session.connected
```
### Pattern 2: Testing Lifecycle

```python
def test_complete_session_lifecycle(self):
    """Test session through connect → use → disconnect"""
    session = ReplSession()

    # Connect
    session.connect(":memory:")
    assert session.connected

    # Use
    session.add_document("content")
    assert session.storage.count() == 1

    # Disconnect
    session.disconnect()
    assert not session.connected
```
### Pattern 3: Testing Error Recovery

```python
def test_system_survives_operation_error(self):
    """Test system remains usable after error"""
    system = System()

    # Cause error
    with pytest.raises(ValueError):
        system.invalid_operation()

    # System still works
    system.valid_operation()  # Should succeed
    assert system.is_operational()
```
### Pattern 4: Testing Integration

```python
def test_end_to_end_workflow(self):
    """Test complete user workflow"""
    # Setup
    rag = setup_rag()

    # Add documents
    rag.add_node("doc1", "content1")
    rag.add_node("doc2", "content2")

    # Build network
    rag.build_network()

    # Search
    results = rag.search("content").top(5)

    # Verify
    assert len(results) > 0
```
## Debugging Failing Tests

```bash
# 1. Run with verbose output
pytest -v

# 2. Show print statements
pytest -s

# 3. Drop into debugger on failure
pytest --pdb

# 4. Show full traceback
pytest --tb=long

# 5. Run only failed tests
pytest --lf
```
## Fixtures Reference

### Common Fixtures Available

```python
# In tests/conftest.py (if it exists) or individual test files

@pytest.fixture
def temp_db():
    """Temporary database file"""
    # Use in tests that need file-based storage

@pytest.fixture
def memory_storage():
    """In-memory SQLite storage"""
    # Use for fast tests

@pytest.fixture
def tfidf_embedder():
    """TF-IDF embedding provider"""
    # Use for tests that need embeddings

@pytest.fixture
def basic_rag(memory_storage, tfidf_embedder):
    """Basic NetworkRAG instance"""
    # Use for tests that need RAG

@pytest.fixture
def populated_rag(basic_rag):
    """RAG with sample documents"""
    # Use for tests that need data
```
## Coverage Analysis

```bash
# Generate HTML coverage report
pytest --cov=src --cov-report=html

# Show missing lines
pytest --cov=src --cov-report=term-missing

# Coverage for a specific module
pytest --cov=src/<module> tests/<test_file>.py

# Fail if coverage drops below a threshold
pytest --cov=src --cov-fail-under=80
```
## Test Markers (for organization)

### Define markers in pytest.ini

```ini
[pytest]
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    repl: marks REPL-specific tests
    cli: marks CLI-specific tests
```

### Use markers in tests

```python
@pytest.mark.slow
def test_large_network_build():
    """Test with 10k+ nodes"""
    # Slow test

@pytest.mark.integration
def test_complete_workflow():
    """End-to-end integration test"""
    # Integration test
```

### Run tests by marker

```bash
# Skip slow tests
pytest -m "not slow"

# Run only integration tests
pytest -m integration

# Run REPL tests
pytest -m repl
```
## Common Assertions

### Basic assertions

```python
assert condition
assert value == expected
assert value != unexpected
assert value in collection
assert value is None
assert value is not None
```

### Approximate equality (for floats)

```python
assert abs(value - expected) < 0.01
# Or use pytest.approx
assert value == pytest.approx(expected, abs=0.01)
```

### Exception testing

```python
# Basic exception
with pytest.raises(ValueError):
    function_that_raises()

# Exception with message check
with pytest.raises(ValueError, match="specific message"):
    function_that_raises()

# Capture exception for inspection
with pytest.raises(ValueError) as exc_info:
    function_that_raises()
assert "expected text" in str(exc_info.value)
```

### Collection assertions

```python
assert len(collection) == expected_length
assert item in collection
assert set(collection) == expected_set
assert collection[0] == first_item
```
## Mocking (Use Sparingly)

### Mock a function

```python
from unittest.mock import Mock, patch

def test_with_mocked_function():
    with patch('module.function') as mock_func:
        mock_func.return_value = "mocked result"
        result = code_that_calls_function()
        assert result == "mocked result"
        mock_func.assert_called_once()
```

### Mock a class

```python
def test_with_mocked_class():
    mock_obj = Mock()
    mock_obj.method.return_value = "result"

    # Use mock_obj in test
    assert mock_obj.method() == "result"
```
### When to mock

- External services (APIs; databases in some cases)
- Slow operations (when isolating unit logic)
- Non-deterministic behavior (time, random)

### When NOT to mock

- Internal application logic (test real behavior)
- Simple functions (just call them)
- Storage operations (use test databases)
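As an illustration of the "non-deterministic behavior" case, the sketch below freezes `time.time` so the assertion is stable; `timestamped_message` is a hypothetical helper, not part of the codebase:

```python
import time
from unittest.mock import patch

def timestamped_message(text):
    # Hypothetical helper that embeds the current UNIX time in a message
    return f"[{int(time.time())}] {text}"

def test_timestamped_message_is_deterministic():
    # Freeze time.time so the test does not depend on the wall clock
    with patch("time.time", return_value=1000):
        assert timestamped_message("hi") == "[1000] hi"
```

Only the non-deterministic dependency is mocked; the formatting logic itself still runs for real.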
## Test Data Management

### Use builders for complex setup

```python
class DocumentBuilder:
    def __init__(self):
        self.data = {}

    def with_title(self, title):
        self.data['title'] = title
        return self

    def with_content(self, content):
        self.data['content'] = content
        return self

    def build(self):
        return self.data

# Usage
doc = (DocumentBuilder()
       .with_title("Test")
       .with_content("Content")
       .build())
```

### Use fixtures for shared data

```python
@pytest.fixture
def sample_documents():
    return [
        {"id": "doc1", "content": "Content 1"},
        {"id": "doc2", "content": "Content 2"},
    ]

def test_something(sample_documents):
    # Use sample_documents
    pass
```
## Performance Testing

```bash
# Time test execution (show the 10 slowest tests)
pytest --durations=10

# Profile tests (requires the pytest-profiling plugin)
pytest --profile

# Mark slow tests with @pytest.mark.slow, then deselect them
pytest -m "not slow"
```
## CI/CD Integration

### GitHub Actions example

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest pytest-cov
      - name: Run tests
        run: pytest --cov=src --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```
## Quick Tips

- **Run tests often** - After every significant change
- **Write tests first** - TDD when possible
- **Keep tests fast** - Under 10ms for unit tests
- **One logical assertion per test** - Can be multiple `assert` statements
- **Test edge cases** - Empty lists, None values, boundaries
- **Clean up resources** - Use fixtures with `yield`
- **Read test output** - Understand why tests fail
- **Update tests when requirements change** - Keep tests current
- **Don't skip tests** - Fix or remove them
- **Review coverage regularly** - Know what's not tested
## Getting Help

### Documentation

- `pytest --help` - All pytest options
- `pytest --markers` - List all markers
- `pytest --fixtures` - List all fixtures

### Analysis Files

- `TEST_SUITE_ANALYSIS.md` - Comprehensive analysis
- `TESTING_IMPROVEMENTS_SUMMARY.md` - What was done
- This file - Quick reference

### Coverage Reports

- Run: `pytest --cov=src --cov-report=html`
- View: `htmlcov/index.html`
**Remember:** Good tests enable fearless refactoring and confident deployment.

*Test early. Test often. Test well.*