Homomorphic computational extensions
Alex Towell 3/5/2022
We consider homomorphisms motivated by computational concerns, used to transform inefficient or lossy computations over some original domain $X$ into a conceptually equivalent group $G$ over a restricted set of operations. If the original problem can be solved using these restricted operations, then we may transform values of $X$ into $G$ and efficiently perform the computations there. Sometimes the entire solution cannot be transformed back to $X$, but some queries can, such as predicates, e.g., is $a < b$, even though $a$ or $b$ may not correspond to values in the domain $X$.
Given that type $X$ is a ring $(X, +, \cdot)$, we define a group $(G, \otimes)$ with homomorphisms $F : X \to G$ and $F^{-1} : G \to X$. If $F$ and $F^{-1}$ are inverses, e.g., $F^{-1}(F(x)) = x$ for all $x \in X$ and $F(F^{-1}(g)) = g$ for all $g \in G$, then $X$ and $G$ are isomorphic, although this is not generally the case.
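As a concrete illustration (the type and member names below are hypothetical, not from the text above), take $X$ to be the positive reals under multiplication and let $F = \log$ and $F^{-1} = \exp$, so that $G$ is the reals under addition. A minimal sketch:

```cpp
#include <cmath>

// Hypothetical sketch: a value of G stores F(x) = log(x); construction
// from X and mapping back to X realize F and F^{-1}.
struct LogSpace {
    double log_x;  // the representation of x in G

    // F : X -> G
    static LogSpace from_value(double x) { return LogSpace{std::log(x)}; }

    // F^{-1} : G -> X
    double value() const { return std::exp(log_x); }
};
```

Here $F^{-1}(F(x)) = x$ only up to floating-point rounding, so this $X$ and $G$ are approximately, not exactly, isomorphic.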
We make the additional requirements that $X$ should be a regular type, e.g., assignment ($=$) and equality ($==$) are defined, and that $X$ should have a total ordering ($<$) defined. As a result, $G$ will also be a regular type with some ordering relation.
$F$ must map values such that $G$ has the same ordering relation as $X$ on values in the range of $F$, and it must extend that relation to any values outside the range of $F$ in the following way: if $a < b$ in $G$ and $a$ or $b$ is not in the range of $F$, then the ordering is still consistent with the ordering their conceptual values would have in $X$. This is usually done by making $F$ a strictly monotonically increasing function.
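For instance, since $\log$ is strictly monotonically increasing, comparing log-space representations directly agrees with comparing the underlying values, and the comparison remains well defined even for values too large to represent in $X$. A hedged sketch, continuing the hypothetical `LogSpace` example and supplying the regular-type operations:

```cpp
#include <iostream>

// Hypothetical: values of G are log-space doubles; since F = log is
// strictly increasing, comparing representations preserves the ordering
// of the underlying values in X.
struct LogSpace { double log_x; };

bool operator==(LogSpace a, LogSpace b) { return a.log_x == b.log_x; }
bool operator<(LogSpace a, LogSpace b)  { return a.log_x <  b.log_x; }

int main() {
    // exp(1000) and exp(1001) both overflow double, i.e., these values
    // of G are outside the range of F, yet their ordering is well defined.
    LogSpace a{1000.0}, b{1001.0};
    std::cout << std::boolalpha << (a < b) << '\n';  // true
}
```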
The fact that a value of type $G$ must initially be constructed from a value of type $X$ means that these constructions are lossless. However, since $G$ is a group with operations, namely $\otimes$ and its inverse, a value of type $G$ may not map back to a value in $X$. Typically, the value it would conceptually map to would be outside the range of $X$, e.g., either too small or too large. We can also expect, depending on $F$ and $F^{-1}$, some independent sources of loss, e.g., if $F$ introduces some minor loss due to rounding or truncation.
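With $F = \log$ and $F^{-1} = \exp$ over doubles, this kind of rounding loss is easy to observe; a small sketch:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Round-tripping through F = log and F^{-1} = exp may not return
    // x exactly: an independent source of loss from rounding.
    double x = 0.1234567890123456;
    double round_trip = std::exp(std::log(x));
    std::printf("%.17g\n%.17g\n", x, round_trip);  // may differ in the last digits
}
```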
The operation $\otimes : G \times G \to G$ is defined as $a \otimes b = F\left(F^{-1}(a) \cdot F^{-1}(b)\right)$, which is associative by the associativity of $\cdot$. The inverse operation $\otimes^{-1}$ is defined as $a \otimes^{-1} b = F\left(F^{-1}(a) \cdot F^{-1}(b)^{-1}\right)$. Finally, the identity $e_G$ must be given by $e_G = F(e_X)$; thus we see that $F$ must satisfy $F(x^{-1}) = F(x)^{-1}$ and $F(e_X) = e_G$, i.e., $F$ and $F^{-1}$ must map inverses to inverses and identities to identities.
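In the log-space example these definitions collapse to ordinary arithmetic on the representations: $a \otimes b$ is addition, $a \otimes^{-1} b$ is subtraction, and the identity is $F(1) = \log 1 = 0$. A hypothetical sketch:

```cpp
#include <cmath>
#include <iostream>

// Hypothetical log-space sketch: a (x) b = F(F^{-1}(a) * F^{-1}(b))
// collapses to addition, since log(x*y) = log(x) + log(y).
struct LogSpace { double log_x; };

LogSpace otimes(LogSpace a, LogSpace b)     { return {a.log_x + b.log_x}; }  // a (x) b
LogSpace otimes_inv(LogSpace a, LogSpace b) { return {a.log_x - b.log_x}; }  // a (x)^{-1} b
LogSpace identity() { return {0.0}; }  // e_G = F(e_X) = log(1) = 0

int main() {
    LogSpace a{std::log(3.0)}, b{std::log(4.0)};
    std::cout << std::exp(otimes(a, b).log_x) << '\n';          // 12
    std::cout << std::exp(otimes_inv(a, b).log_x) << '\n';      // 0.75
    std::cout << std::exp(otimes(a, identity()).log_x) << '\n'; // 3
}
```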
$G$ is intended to make computations more precise, faster, or less susceptible to underflows or overflows over a more restricted set of operations than $X$, where the computational basis of $G$ is $\otimes$, $\otimes^{-1}$, and the regular-type operations (assignment, equality, and ordering).
Use case: applying likelihood functions to observed data
When dealing with very large or small numbers in numerical computation, such as computing likelihoods, it is frequently faster and more precise to work with logarithms of numbers and to map them back to their actual values only when needed. This allows multiplication to be as simple and fast as addition, and it avoids most intermediate values that would otherwise result in underflows or overflows even when the final result is within the range of $X$.
If the final result is not within the range of $X$, which can be checked, and the actual value is needed for additional computations, other steps may be taken, e.g., mapping the result to a slow bignum data type.
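A small sketch of this use case (the data here are made up for illustration): the likelihood of many i.i.d. observations underflows a `double` when the densities are multiplied directly, while the log-likelihood stays comfortably in range:

```cpp
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical data: 1000 observations, each with density 0.001.
    std::vector<double> densities(1000, 0.001);

    double direct = 1.0;          // product computed in X
    double log_likelihood = 0.0;  // same product computed in G
    for (double d : densities) {
        direct *= d;                    // 0.001^1000 = 1e-3000 underflows to 0
        log_likelihood += std::log(d);  // stays in range
    }

    std::cout << "direct product: " << direct << '\n';          // 0
    std::cout << "log-likelihood: " << log_likelihood << '\n';  // about -6907.76
}
```

Order predicates, such as which of two models is more likely, can be answered directly in log space, without ever mapping back to $X$.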