CBT (Computational Basis Transforms) is a C++17 header-only library that formalizes a fundamental principle underlying many computational techniques: transforming problems into different representational domains where certain operations become more efficient.
## The Core Insight
Consider these seemingly unrelated techniques:
- The FFT transforms convolution (O(n²)) into pointwise multiplication (O(n)); even counting the O(n log n) transforms each way, the whole pipeline beats direct convolution
- Logarithms transform multiplication into addition
- Odds ratios transform Bayesian updates into simple multiplication
They all share the same pattern: transform to a domain where your target operation is cheap.
CBT formalizes this as a quadruple (D, D’, φ, Ω) where:
- D, D’ are computational domains
- φ: D → D’ is the transform
- Ω captures the trade-offs
There is no universal “best” representation—only fitness for purpose. Every transform involves fundamental trade-offs.
## Quick Start
```cpp
#include <cbt/cbt.hpp>

int main() {
    using namespace cbt;

    // Avoid underflow in products of small probabilities
    lgd product(1e-100);
    for (int i = 0; i < 1000000; ++i) {
        product = product * lgd(1e-100);  // No underflow!
    }

    // Bayesian inference without normalization
    auto prior = odds_ratio<double>::from_probability(0.01);
    auto likelihood_ratio = odds_ratio<double>(19.6);  // illustrative: a 98%-sensitive, 95%-specific test
    auto posterior = prior * likelihood_ratio;         // Simple multiplication!

    return 0;
}
```
## Available Transforms
### Logarithmic Transform (`cbt::lg<T>`)

Transform multiplication into addition, handle extreme ranges:

```cpp
cbt::lgd a(1000), b(2000);
auto product = a * b;            // Internally: log(1000) + log(2000)
auto huge = lgd::from_log(800);  // Represents e^800 without overflow
```
Trade-off: Gains efficient multiplication and extended range; loses direct addition.
Use cases: Products of many terms, wide dynamic ranges, ML log-likelihoods.
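The internals are easy to sketch without the library. A minimal log-domain number (illustrative names, not CBT's actual `lg<T>` implementation) stores `log(x)`, turns `*` into `+`, and pays for `+` with a numerically stable log-sum-exp:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative log-domain number (not CBT's lg<T>): the stored field is
// log(x), so multiplication becomes addition and extreme products stay finite.
struct LogNum {
    double lg;  // natural log of the represented value

    static LogNum from(double x) { return {std::log(x)}; }
    double value() const { return std::exp(lg); }

    LogNum operator*(LogNum o) const { return {lg + o.lg}; }

    // Addition is the trade-off: it needs a log-sum-exp, done here in the
    // "factor out the max" form to avoid overflow in exp().
    LogNum operator+(LogNum o) const {
        double hi = std::max(lg, o.lg), lo = std::min(lg, o.lg);
        return {hi + std::log1p(std::exp(lo - hi))};
    }
};
```

Multiplying `1e-100` by itself keeps a finite `lg` field where the plain double product would hit zero within a few factors.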
### Odds-Ratio Transform (`cbt::odds_ratio<T>`)

Transform Bayesian updates into multiplication:

```cpp
auto prior = cbt::odds_ratio<double>::from_probability(0.01);
auto posterior = prior * likelihood_ratio;  // Bayesian update!
double prob = posterior.to_probability();
```
Trade-off: Gains simple sequential updates without normalization; loses direct probability addition.
Use cases: Medical diagnosis, spam filtering, any sequential Bayesian inference.
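Under the hood this is one line of arithmetic. A library-independent sketch (the helper names are illustrative, not CBT's API):

```cpp
#include <cassert>
#include <cmath>

// Illustrative odds-ratio machinery (not CBT's odds_ratio<T>): probabilities
// convert to odds, evidence multiplies in as a likelihood ratio, and the
// result converts back with no normalization pass.
double to_odds(double p) { return p / (1.0 - p); }
double to_prob(double o) { return o / (1.0 + o); }

// One Bayesian update: posterior odds = prior odds * P(E|H) / P(E|not H)
double update(double prior_prob, double likelihood_ratio) {
    return to_prob(to_odds(prior_prob) * likelihood_ratio);
}
```

With a 1% prior and evidence nine times likelier under the hypothesis, `update(0.01, 9.0)` lands on 1/12, exactly what Bayes' theorem gives directly.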
### Stern-Brocot Transform (`cbt::stern_brocot<T>`)

Exact rational arithmetic without floating-point errors:

```cpp
cbt::stern_brocot<int> a(1, 3), b(1, 7);
auto sum = a + b;  // Exact: 10/21, no rounding errors!
auto pi = stern_brocot<int>::approximate(3.14159, 1000);  // Best rational approx
```
Trade-off: Gains exact arithmetic; loses finite representation for irrationals.
Use cases: Computer algebra, music theory (pure intervals), continued fractions.
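How an `approximate` routine can work is worth seeing: walk the Stern-Brocot tree by taking mediants until the denominator budget runs out. This is the classic algorithm for positive targets, sketched independently of CBT's implementation:

```cpp
#include <cassert>
#include <cmath>

// Classic Stern-Brocot walk (illustrative, not CBT's code): keep a bracket
// [lo, hi] around the target and repeatedly refine it with the mediant
// (lo.n + hi.n) / (lo.d + hi.d) until the denominator budget runs out.
struct Frac { long n, d; };

Frac approximate(double target, long max_den) {
    Frac lo{0, 1}, hi{1, 0};  // tree bounds: 0/1 and 1/0 ("infinity")
    while (true) {
        Frac med{lo.n + hi.n, lo.d + hi.d};
        if (med.d > max_den) break;
        if (static_cast<double>(med.n) / med.d < target) lo = med;
        else hi = med;
    }
    // The best approximation with denominator <= max_den is one of the bounds.
    double e_lo = std::fabs(target - static_cast<double>(lo.n) / lo.d);
    double e_hi = std::fabs(target - static_cast<double>(hi.n) / hi.d);
    return e_lo <= e_hi ? lo : hi;
}
```

For pi with denominators up to 1000 the walk recovers the famous 355/113.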
### Residue Number System (`cbt::residue_number_system<T,N>`)

Fully parallel arithmetic:

```cpp
auto rns_a = cbt::rns3<int>::from_integer(12345);
auto rns_b = cbt::rns3<int>::from_integer(67890);
auto product = rns_a * rns_b;  // No carry propagation, fully parallel!
```
Trade-off: Gains parallelism; loses efficient comparison and division.
Use cases: Cryptography, DSP, fault-tolerant computing.
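The idea is easy to demystify without the library. With pairwise coprime moduli, each residue channel computes independently; the sketch below (illustrative moduli and names, not CBT's API) decodes by brute-force Chinese-remainder search purely for clarity:

```cpp
#include <cassert>

// Illustrative 3-modulus residue number system (not CBT's implementation).
// With pairwise coprime moduli, the residue channels never interact:
// multiplication has no carries between them.
constexpr int M[3] = {99, 98, 97};     // pairwise coprime
constexpr long RANGE = 99L * 98 * 97;  // 941094 distinct representable values

struct RNS { int r[3]; };

RNS encode(long x) {
    return {{static_cast<int>(x % 99), static_cast<int>(x % 98),
             static_cast<int>(x % 97)}};
}

RNS mul(RNS a, RNS b) {
    RNS c{};
    for (int i = 0; i < 3; ++i)
        c.r[i] = (a.r[i] * b.r[i]) % M[i];  // each channel in isolation
    return c;
}

// Brute-force Chinese-remainder decode, fine for a sketch; real systems
// precompute CRT reconstruction constants instead.
long decode(RNS a) {
    for (long x = 0; x < RANGE; ++x)
        if (x % 99 == a.r[0] && x % 98 == a.r[1] && x % 97 == a.r[2]) return x;
    return -1;
}
```

Results are exact only while they stay below `RANGE`; the difficulty of such range checks (and of comparison generally) is exactly the trade-off noted above.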
### Multiscale Transform (`cbt::multiscale<T>`)

Handle 200+ orders of magnitude:

```cpp
cbt::multiscale<double> planck_length(1.616e-35);
cbt::multiscale<double> universe_diameter(8.8e26);
auto ratio = universe_diameter / planck_length;  // 61 orders of magnitude!
```
Trade-off: Gains extreme range; loses precision at scale boundaries.
Use cases: Astrophysics, quantum mechanics, multi-scale simulations.
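One way such a type can work, sketched independently of CBT (names and layout here are illustrative): split each number into a mantissa in [1, 2) and a wide integer exponent, so the exponent never saturates the way a double's 11-bit exponent does.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Illustrative multiscale number (not CBT's implementation): a normalized
// mantissa plus a 64-bit base-2 exponent.
struct Scaled {
    double mantissa;    // in [1, 2)
    std::int64_t exp2;  // base-2 exponent, far wider than double's

    static Scaled from_double(double x) {
        int e;
        double m = std::frexp(x, &e);  // x = m * 2^e with m in [0.5, 1)
        return {m * 2.0, e - 1};
    }
    Scaled operator/(const Scaled& o) const {
        Scaled r{mantissa / o.mantissa, exp2 - o.exp2};
        if (r.mantissa < 1.0) { r.mantissa *= 2.0; --r.exp2; }  // renormalize
        return r;
    }
    double log10() const {  // readable order of magnitude
        return std::log10(mantissa) + exp2 * std::log10(2.0);
    }
};
```

The universe-to-Planck ratio above comes out near 10^61.7, comfortably inside this representation even when intermediate results would overflow a plain double.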
### Additional Transforms
| Transform | Class | Purpose |
|---|---|---|
| Dual | `dual<T>` | Automatic differentiation |
| Interval | `interval<T>` | Rigorous error bounds |
| Tropical | `tropical<T>` | Min-plus algebra (shortest paths) |
| Modular | `modular<T,M>` | Cyclic arithmetic |
| Quaternion | `quaternion<T>` | Gimbal-lock-free 3D rotations |
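As a taste of the table, dual numbers are small enough to sketch in full (again illustrative, not CBT's `dual<T>`): carry (value, derivative) pairs through arithmetic and the chain rule applies itself.

```cpp
#include <cassert>

// Illustrative forward-mode dual number (not CBT's dual<T>): v is the value,
// d the derivative with respect to the seeded input variable.
struct Dual {
    double v, d;
};

Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) {
    return {a.v * b.v, a.d * b.v + a.v * b.d};  // product rule
}

// f(x) = x^2 + x, differentiated simply by evaluating it on duals
Dual f(Dual x) { return x * x + x; }
```

Seeding `x = {3, 1}` (derivative of x with respect to itself is 1) yields f(3) = 12 and f'(3) = 7 in a single pass, with no symbolic math and no finite differences.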
## Transform Composition
CBTs compose for multiplicative benefits:
```cpp
// Extreme scale multiplication efficiency
using extreme_mult = cbt::multiscale<cbt::lg<double>>;

// Stable Bayesian inference
using log_odds = cbt::lg<cbt::odds_ratio<double>>;

// Exact rational intervals
using rational_interval = cbt::interval<cbt::stern_brocot<int>>;
```
## Real-World Examples
### Avoiding Numerical Underflow
```cpp
// Problem: Product of many small probabilities underflows
double compute_likelihood_naive(const std::vector<double>& probs) {
    double result = 1.0;
    for (double p : probs) {
        result *= p;  // Underflows to 0!
    }
    return result;
}

// Solution: Logarithmic transform
double compute_likelihood_stable(const std::vector<double>& probs) {
    cbt::lgd result(1.0);
    for (double p : probs) {
        result = result * cbt::lgd(p);  // No underflow!
    }
    return result.value();
}
```
### Medical Diagnosis System
```cpp
class MedicalDiagnosis {
    cbt::odds_ratio<double> disease_odds;
public:
    explicit MedicalDiagnosis(double prevalence)
        : disease_odds(cbt::odds_ratio<double>::from_probability(prevalence)) {}

    void update_with_test(double sensitivity, double specificity) {
        double lr = sensitivity / (1.0 - specificity);  // positive likelihood ratio
        disease_odds = disease_odds * cbt::odds_ratio<double>(lr);
    }

    double get_probability() const {
        return disease_odds.to_probability();
    }
};

// Usage
MedicalDiagnosis covid_test(0.01);        // 1% prevalence
covid_test.update_with_test(0.98, 0.95);  // First test
covid_test.update_with_test(0.98, 0.95);  // Second test
// Probability updates correctly without manual normalization
```
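The numbers can be checked with plain arithmetic, no library needed: prior odds are 0.01/0.99, and each positive test contributes a likelihood ratio of 0.98/0.05 = 19.6.

```cpp
#include <cassert>

// Plain-double check of the diagnosis above: two positive tests at 98%
// sensitivity / 95% specificity move a 1% prior to roughly 79.5%.
double posterior_after_two_positives() {
    double odds = 0.01 / 0.99;         // prior odds
    double lr = 0.98 / (1.0 - 0.95);   // likelihood ratio = 19.6
    odds *= lr;                        // first test
    odds *= lr;                        // second test
    return odds / (1.0 + odds);        // back to probability
}
```

Note how the intermediate odds never need renormalizing; the conversion back to a probability happens once, at the end.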
## Installation
CBT is header-only—just include and use:
```cpp
#include <cbt/cbt.hpp>  // All transforms

// Or specific transforms:
#include <cbt/logarithmic.hpp>
#include <cbt/odds_ratio.hpp>
```
CMake integration:
```cmake
add_subdirectory(path/to/cbt)
target_link_libraries(your_target PRIVATE cbt::cbt)
```
## Why This Matters
The CBT framework provides:
- Unifying Principle: Understand diverse techniques as instances of one pattern
- Design Guidance: Meta-patterns help discover new transforms
- Composition Rules: Transforms can be systematically combined
- Trade-off Analysis: Makes implicit costs explicit
Rather than learning FFT, log-arithmetic, and odds ratios as separate tools, CBT reveals they’re all the same idea applied to different domains.
## Resources
- GitHub: github.com/queelius/cbt
- Paper: see `paper/cbt_theory.tex` for a formal treatment
CBT: When you understand that all computational tricks are just basis transforms, you stop memorizing techniques and start deriving them.