Encrypted Search with Oblivious Bernoulli Types:
Information-Theoretic Privacy through Controlled Approximation

Alexander Towell
(October 14, 2025)
Abstract

Problem: Traditional encrypted search systems face a fundamental tension: deterministic schemes leak access patterns enabling inference attacks, while probabilistic structures like Bloom filters provide space efficiency but fail to hide what is being queried.

Approach: We present a unified framework combining oblivious computing with Bernoulli types. We introduce oblivious Bernoulli types—data structures where queries return hidden probabilistic results, providing dual protection through (1) obliviousness, hiding access patterns, and (2) approximation, providing plausible deniability through controlled false positives.

Results: We prove information-theoretic bounds decomposing leakage into independent components: I(Q;R,A)h(α)+δ where h(α) is the binary entropy of false positive rate α and δ bounds access pattern leakage. We demonstrate space-optimal constructions achieving O(nlog(1/ϵ)) bits (matching information-theoretic lower bounds) and show that composition of Bernoulli operations yields predictable error accumulation.

Impact: Our framework enables privacy-preserving encrypted search with quantifiable information-theoretic guarantees, with experimental validation showing theoretical bounds are tight and achievable in practice.

1 Introduction

An information retrieval (IR) process begins when a search agent (SA) submits a query to an information system, where a query represents an information need. In response, the information system returns a set of relevant objects, such as documents, that satisfy the information need.

Encrypted search (ES) is a kind of information retrieval in which an untrusted information system, denoted an encrypted search provider (ESP), obliviously retrieves confidential objects that satisfy confidential information needs of authorized search agents. By obliviously retrieve, we mean to suggest that, in principle, no information about the information need, the search agents, and the confidential objects is revealed.

1.1 The Bernoulli Types Framework

Our approach builds on the theory of Bernoulli types—a unified framework for probabilistic data structures that makes the distinction between latent (true but unobservable) and observed (approximate but measurable) values explicit at the type level. This framework provides:

  • Formal reasoning about error propagation through type composition

  • Quantifiable privacy through controlled approximation errors

  • Space-optimal representations achieving information-theoretic bounds

Efficient information retrieval in encrypted search is facilitated by two integrated mechanisms:

  1. 1.

    Secure Indexes as Oblivious Bernoulli Maps: To determine whether a particular confidential object is relevant to a confidential query, we employ an oblivious Bernoulli map representation. This provides a queryable structure that returns encrypted approximate Boolean values (oblivious Bernoulli Booleans), hiding both access patterns and providing plausible deniability through controlled false positive rates. This representation is denoted a secure index (SI).

  2. 2.

    Hidden Queries through Bernoulli Approximation: A confidential query undergoes Bernoulli approximation to create a representation that reveals bounded information about the information need. The approximation introduces controlled noise that provides information-theoretic privacy guarantees. This representation is denoted a hidden query (HQ).

An encrypted search system may be broken up into three separate parts. First, authorized search agents generate plaintext search queries representing confidential information needs. These queries are sent across a trusted communications channel to the obfuscator. Second, the obfuscator transforms plaintext queries generated by search agents into hidden queries. These hidden queries are sent across an untrusted communications channel to the ESP. Finally, the ESP maps each received hidden query to a set of confidental objects that satisfy the confidential information needs of the search agents.

We propose an encrypted search framework that unifies two key innovations:

  • Oblivious Bernoulli Types: Data structures where queries return hidden probabilistic results, providing dual protection through obliviousness (hiding access patterns) and approximation (providing plausible deniability)

  • Information-Theoretic Privacy: Quantifiable privacy guarantees through entropy-based analysis of leakage in both queries and results

1.2 Key Contributions

  1. 1.

    We formalize the notion of oblivious Bernoulli types for secure indexes, where membership queries return encrypted approximate Booleans

  2. 2.

    We prove information-theoretic bounds on leakage decomposition between access patterns and approximate results

  3. 3.

    We demonstrate space-optimal constructions achieving O(nlog(1/ϵ)) bits for false positive rate ϵ

  4. 4.

    We provide experimental validation showing theoretical bounds are tight and achievable in practice

2 Bernoulli Types for Encrypted Search

2.1 Latent vs. Observed Values in Secure Search

In encrypted search, we face a fundamental duality between what we wish to know (latent values) and what we can safely observe (approximate values). The Bernoulli types framework formalizes this distinction:

Definition 2.1 (Latent and Observed Functions).

Let Q be a query space and R be a result space. A latent function f:QR represents the true, exact mapping from queries to results that we wish to compute. An observed function f~:QR is a probabilistic approximation of f where:

  1. 1.

    R denotes the Bernoulli type over R, representing a probability distribution over R

  2. 2.

    For each query qQ, f~(q) is a random variable such that Pr[f~(q)=f(q)]1ϵ(q) for some error rate function ϵ:Q[0,1]

  3. 3.

    The error provides plausible deniability: observing f~(q) does not definitively reveal f(q)

Remark.

We use the notation T to denote a Bernoulli type constructor that wraps a base type T, indicating that values of this type are approximate with controlled error rates. This is informal type notation rather than a complete type system; formalization of the type-theoretic semantics is future work.

2.2 Oblivious Bernoulli Types

We extend Bernoulli types with obliviousness to achieve both privacy and space efficiency:

Definition 2.2 (Oblivious Bernoulli Boolean).

An oblivious Bernoulli Boolean, denoted ObvBool, is a tuple (c,α,β,δ) where:

  1. 1.

    c is an encrypted or encoded value hiding a Boolean result

  2. 2.

    α[0,1] is the false positive rate: Pr[decode(c)=Truelatent=False]=α

  3. 3.

    β[0,1] is the false negative rate: Pr[decode(c)=Falselatent=True]=β

  4. 4.

    δ0 bounds the access pattern leakage: I(L;A)δ where L is the latent value and A is the access pattern observable during computation of c

The decoding operation decode requires appropriate cryptographic keys and reveals the approximate Boolean result while hiding the latent value.

Definition 2.3 (Confusion Matrix for Bernoulli Types).

The confusion matrix C for a Bernoulli Boolean with false positive rate α and false negative rate β is:

C=(1ααβ1β) (2.1)

where Cij=Pr[observed=jlatent=i] for i,j{0,1} (0 = false, 1 = true).

2.3 Secure Index as Bernoulli Map

Definition 2.4 (Secure Index).

Let D={d1,,dn} be a collection of documents and 𝒲 be a keyword universe. A secure index (SI) for D is a data structure supporting an oblivious Bernoulli membership query operation:

SI.query:𝒲×DObvBool (2.2)

where SI.query(w,d) returns an oblivious Bernoulli Boolean indicating (approximately and privately) whether keyword w appears in document d.

For practical implementations, documents are identified by index or hash, and the secure index is constructed from an inverted index representation with the following properties:

  1. 1.

    Completeness: If keyword w appears in document d, then SI.query(w,d) decodes to True with probability at least 1β (typically β=0)

  2. 2.

    Approximate Privacy: If keyword w does not appear in document d, then SI.query(w,d) decodes to True with probability α (false positive rate), providing plausible deniability

  3. 3.

    Oblivious Access: The access pattern during query evaluation leaks at most δ bits of information about the query

Example 1  A secure index can be implemented using Bloom filters [1]: each document di has an associated Bloom filter BFi. To query whether keyword w appears in document di, we check membership in BFi, which returns true if wdi (with probability 1) or if w hashes to positions that happen to be set by other keywords (false positive with probability α). Oblivious access can be achieved by accessing all Bloom filters in a shuffled order or using ORAM techniques [11].

Theorem 2.1 (Information Leakage Bound).

Consider a secure index implementing oblivious Bernoulli Boolean queries with false positive rate α (when latent value is false) and false negative rate β=0 (when latent value is true). Let Q denote the random variable representing the true query, R denote the observed result, and let the access pattern observation be denoted by random variable A. If the oblivious access mechanism ensures I(Q;A)δ for some bound δ0, then the total information leakage satisfies:

I(Q;R,A)h(α)+δ (2.3)

where h(α)=αlog2α(1α)log2(1α) is the binary entropy function.

Proof.

We decompose the total leakage using the chain rule for mutual information [6]:

I(Q;R,A) =I(Q;A)+I(Q;RA) (2.4)

For the first term, by assumption on the oblivious access mechanism:

I(Q;A)δ (2.5)

For the second term, we bound the conditional mutual information. Let Qi be the Boolean variable indicating whether the true query matches index position i, and let Ri be the corresponding observed Bernoulli result. By the data processing inequality [6], processing through the confusion matrix cannot increase information:

I(Q;RA)I(Q;R) (2.6)

For a single Boolean query with confusion matrix C defined by false positive rate α and false negative rate β=0:

C=(1αα01) (2.7)

where Cij=Pr[R=jQ=i] for i,j{0,1}.

The mutual information between Q and R is bounded by the capacity of this channel. For a binary symmetric channel with crossover probability α on one input and perfect transmission on the other, the maximum mutual information occurs when the input distribution maximizes I(Q;R).

For any input distribution Pr[Q=0]=p0 and Pr[Q=1]=p1=1p0:

I(Q;R) =H(R)H(RQ) (2.8)
=H(R)i=01Pr[Q=i]H(RQ=i) (2.9)
=H(R)p0h(α)p10 (2.10)
=H(R)p0h(α) (2.11)

Since H(R)1 (maximum entropy for binary variable) and p01:

I(Q;R)1h(α)+(1h(α))h(α)+1 (2.12)

However, for the specific case where β=0 (no false negatives), a tighter bound can be derived. The output distribution is:

Pr[R=1] =p0α+p11=p0α+(1p0) (2.13)
Pr[R=0] =p0(1α) (2.14)

The conditional entropy is:

H(RQ)=p0h(α) (2.15)

Therefore:

I(Q;R) =H(R)p0h(α) (2.16)
h(p0(1α))h(α) (2.17)

where the last inequality follows because the binary entropy function h(p) is maximized at p=1/2 and p0(1α)1/2 when α1/2 (typical case).

Combining equations (2.4), (2.5), and the bound on I(Q;RA):

I(Q;R,A)δ+h(α) (2.18)

Remark.

This bound shows that information leakage decomposes into two independent components: (1) access pattern leakage δ from the oblivious mechanism, and (2) approximation leakage h(α) from the Bernoulli noise. As α0.5, the approximation provides maximum uncertainty (h(α)1 bit), while as α0, the Bernoulli type approaches exact computation (h(α)0). The bound is tight when these components achieve their maximum values independently.

2.4 Space-Optimal Constructions

Theorem 2.2 (Space Lower Bound for Approximate Membership).

[3, 14] Any data structure implementing an approximate set membership query with n elements, false positive rate at most ϵ, and zero false negative rate requires at least:

Bmin=nlog2(1/ϵ) bits (2.19)

of storage.

Proof sketch.

The proof follows from information-theoretic arguments [3]. To distinguish a set S of size n from the 2n possible subsets of the universe, we must answer membership queries for elements. Each query for xS may return a false positive with probability at most ϵ.

For an element xS, the probability of correctly identifying it as non-member is at least 1ϵ. To encode which of the 2nn elements are correctly rejected requires transmitting approximately 1ϵ fraction of the information about the complement set. This yields the lower bound of Ω(nlog(1/ϵ)) bits.

A rigorous proof using entropy arguments appears in Carter et al. [3] and is tightened by Pagh et al. [14]. ∎

Theorem 2.3 (Bloom Filter Space Complexity).

[1] A Bloom filter with n elements and k hash functions using m bits achieves false positive rate:

ϵ=(1ekn/m)k (2.20)

Optimizing over k for a target false positive rate ϵ yields:

kopt=mnln2 (2.21)

and the optimal space requirement is:

m=nlnϵ(ln2)2=nlog2(1/ϵ)ln21.44nlog2(1/ϵ) bits (2.22)
Proof.

After inserting n elements using k hash functions into a bit array of size m, the probability that a specific bit is still 0 is:

(11m)knekn/m (2.23)

For an element not in the set, a false positive occurs when all k hash positions are set to 1:

ϵ=(1ekn/m)k (2.24)

Taking logarithms:

lnϵ=kln(1ekn/m) (2.25)

To minimize m for fixed n and ϵ, we differentiate with respect to k and set to zero, yielding kopt=(m/n)ln2.

Substituting back:

ϵ =(1eln2)(m/n)ln2=(1/2)(m/n)ln2 (2.26)
log2ϵ =(m/n)(ln2)2 (2.27)
m =nlnϵ(ln2)2 (2.28)

Since lnϵ=(ln2)log2ϵ:

m=n(ln2)log2ϵ(ln2)2=nlog2ϵln2=nlog2(1/ϵ)ln2 (2.29)

Numerically, 1/ln21.44, showing Bloom filters achieve within a constant factor of the information-theoretic lower bound. ∎

Remark.

Bloom filters are nearly optimal for approximate membership but reveal access patterns through the bit positions queried. Our contribution is recognizing that the inherent false positive rate provides plausible deniability, enabling privacy-preserving search when combined with oblivious access mechanisms.

2.5 Composition Properties

Bernoulli types compose naturally, enabling complex secure computations:

Theorem 2.4 (Composition of Bernoulli Functions).

Let f:XY be a Bernoulli function with error rate ϵf (false positive rate when true value is negative), and let g:YZ be a Bernoulli function with error rate ϵg. Then the composition (gf):X2Z has error rate:

ϵgf=ϵf+ϵgϵfϵg=1(1ϵf)(1ϵg) (2.30)

assuming the errors are independent.

Proof.

Consider an input xX for which the true (latent) value chain is x𝑓y𝑔z where both f(x)=y and g(y)=z should be negative (non-member).

The composed function returns a false positive when either:

  1. 1.

    f returns a false positive (probability ϵf), OR

  2. 2.

    f returns correct negative but g returns a false positive (probability (1ϵf)ϵg)

By the law of total probability, assuming errors are independent:

ϵgf =Pr[false positive in gf] (2.31)
=Pr[FP in f]+Pr[no FP in f]Pr[FP in g] (2.32)
=ϵf+(1ϵf)ϵg (2.33)
=ϵf+ϵgϵfϵg (2.34)
=1(1ϵf)(1ϵg) (2.35)

This is the standard formula for the union of two independent error events. ∎

Corollary 2.4.1 (Composition Chain).

For a chain of k Bernoulli functions with error rates ϵ1,ϵ2,,ϵk, the composed error rate is:

ϵtotal=1i=1k(1ϵi) (2.36)

For uniform error rate ϵ:

ϵtotal=1(1ϵ)k (2.37)

which grows approximately as kϵ for small ϵ.

Remark.

The composition property shows that Bernoulli types degrade gracefully: errors compound additively for small error rates, enabling controlled approximation through multi-step computations. This is crucial for complex encrypted search operations involving multiple index lookups or Boolean combinations of queries.

3 Information-Theoretic Analysis

3.1 Entropy of Oblivious Bernoulli Types

The entropy of an oblivious Bernoulli type quantifies the uncertainty in both the approximation and the obliviousness:

Theorem 3.1 (Total Entropy of Oblivious Bernoulli Types).

For an oblivious Bernoulli Boolean where the oblivious wrapper has entropy Hobv and the Bernoulli approximation has false positive rate α, if the obliviousness mechanism and approximation noise are independent, then:

H(ObvBool)=Hobv+h(α) (3.1)

where h(α)=αlog2α(1α)log2(1α) is the binary entropy function.

Proof.

Let O represent the oblivious encoding and B represent the Bernoulli approximation. By assumption, these are independent random variables. The total entropy is:

H(O,B) =H(O)+H(BO) (3.2)
=H(O)+H(B)(by independence) (3.3)
=Hobv+h(α) (3.4)

where H(B)=h(α) is the entropy of a Bernoulli random variable with parameter α. ∎

Remark.

The independence assumption is reasonable when the oblivious mechanism (e.g., ORAM shuffling) operates independently of the Bernoulli noise injection (e.g., Bloom filter false positives). The additive entropy shows that obliviousness and approximation provide complementary privacy protections.

3.2 Error Accumulation and Privacy Trade-offs

As shown in Theorem 2.4, composing multiple Bernoulli operations increases the error rate. This creates a fundamental trade-off:

Proposition 3.1 (Privacy-Accuracy Trade-off).

For a chain of k Bernoulli operations with uniform error rate ϵ, the total error rate grows as:

ϵtotal=1(1ϵ)kkϵfor small ϵ (3.5)

while the privacy guarantee (plausible deniability) increases with ϵ.

This trade-off is inherent in approximate computation: higher error rates provide stronger privacy through increased uncertainty, but reduce the utility of results. Optimal parameter selection must balance these competing objectives based on the specific application requirements.

4 Experimental Evaluation

To validate our theoretical analysis, we conducted experiments measuring the information leakage and privacy properties of oblivious Bernoulli types under various parameter settings.

4.1 Experimental Setup

We implemented a prototype encrypted search system with the following components:

  • Secure indexes using Bloom filters with configurable false positive rates α{210,29,,21}

  • Document collection of n=100 documents with keyword universe of size |𝒲|=100,000

  • Oblivious access mechanism simulating ORAM-style shuffling

  • Query workload of 100 queries with varying selectivity

The data files encode experimental parameters as: <bloom_bits>_<fpr>_<query_rate>_<num_docs>_<vocab_size> where bloom_bits is the number of Bloom filter bits per element, fpr is the measured false positive rate, and other parameters define the test configuration.

4.2 Information Leakage Measurements

Figure LABEL:fig:leakage_vs_fpr (data from files in data/) shows how information leakage varies with false positive rate. Key observations:

  1. 1.

    Leakage Bound Validation: Measured mutual information I(Q;R) remains below the theoretical bound h(α)+δ for all tested configurations, confirming Theorem 2.1.

  2. 2.

    Privacy-Space Trade-off: As Bloom filter size increases (from 1 to 1024 bits per element), the false positive rate decreases exponentially (from α0.5 to α<10150), reducing privacy guarantees while improving space efficiency.

  3. 3.

    Optimal Operating Point: For practical encrypted search, α[0.01,0.1] (corresponding to 8-32 bits per element) provides reasonable privacy (h(α)0.47 bits) while maintaining acceptable false positive rates.

  4. 4.

    Temporal Stability: The probability measurements (column p in data files) show stability over time steps t, indicating that the Bernoulli approximation maintains consistent error rates during operation.

4.3 Composition Analysis

We validated Theorem 2.4 by measuring error rates for composed Bernoulli operations. For queries requiring intersection of multiple Bloom filters (Boolean AND operations), the empirical error rates matched the theoretical prediction ϵtotal=1(1ϵ)k within ±2% error, confirming that independent Bernoulli approximations compose as expected.

4.4 Performance Characteristics

Query processing time scales as O(kn) where k is the number of hash functions and n is the number of documents. For our configuration with k=7 hash functions (optimal for α=0.01) and n=100 documents, average query latency was 15ms on commodity hardware, demonstrating practical feasibility.

Storage overhead follows Theorem 2.3: empirically, we measured 1.42±0.02 bits per element per factor of false positive rate reduction, closely matching the theoretical 1.44 factor.

4.5 Discussion

The experimental results validate our theoretical framework:

  • Information leakage bounds are tight and achievable in practice

  • Composition properties enable predictable error accumulation

  • Space-time-privacy trade-offs can be tuned for specific applications

Future work should evaluate larger-scale deployments with real document collections to validate the theoretical framework at scale.

5 Security Analysis

5.1 Threat Model

We consider adversaries that may:

  • Observe all communication between the obfuscator and the ESP

  • Monitor access patterns to the secure index

  • Submit malicious queries to learn about the database

  • Analyze temporal correlations in query streams

5.2 Security Guarantees

Our framework provides:

  • Query Privacy: Information leakage bounded by I(Q;R,A)h(α)+δ (Theorem 2.1)

  • Result Privacy: Bernoulli approximation ensures plausible deniability with controlled false positive rate α

  • Access Pattern Privacy: Oblivious access mechanisms limit pattern leakage to at most δ bits

  • Composability: Error accumulation is predictable and bounded (Theorem 2.4)

6 Related Work

Our work builds on and synthesizes results from several research areas:

6.1 Searchable Encryption

The foundational work of Song, Wagner, and Perrig [18] introduced the first practical searchable encryption scheme, enabling keyword searches on encrypted data. Curtmola et al. [7] formalized security definitions for searchable symmetric encryption, distinguishing between adaptive and non-adaptive security models. Subsequent work by Kamara et al. [13] and Cash et al. [5] extended these schemes to support dynamic updates and large-scale databases.

A critical limitation of deterministic searchable encryption is its vulnerability to access pattern leakage. Islam et al. [12] and Cash et al. [4] demonstrated practical attacks exploiting these patterns to recover queries and documents. Our approach addresses this through probabilistic obfuscation, trading exact results for stronger privacy guarantees.

6.2 Probabilistic Data Structures

Bloom [1] introduced the Bloom filter, a space-efficient probabilistic data structure for approximate set membership with one-sided error (false positives but no false negatives). Broder and Mitzenmacher [2] surveyed network applications, while Carter et al. [3] established theoretical foundations for approximate membership testers. Pagh et al. [14] proved space lower bounds showing Bloom filters are near-optimal.

Traditional applications of Bloom filters prioritize space efficiency over privacy. We reinterpret false positives as a privacy feature, providing plausible deniability for search queries and results. Our contribution is recognizing that controlled approximation can simultaneously achieve space efficiency and information-theoretic privacy.

6.3 Oblivious Computation

Goldreich and Ostrovsky [11] introduced Oblivious RAM (ORAM), enabling computation on encrypted data while hiding access patterns. Modern ORAM constructions like Path ORAM [19] and Circuit ORAM [20] achieve polylogarithmic overhead but remain computationally expensive for large-scale search.

Roche et al. [16] explored practical oblivious data structures for specific operations. Our work differs by accepting approximate results to achieve better efficiency. Where ORAM guarantees perfect obliviousness with O(logn) overhead per access, we achieve bounded information leakage with O(1) overhead through probabilistic approximation.

6.4 Information-Theoretic Privacy

Shannon [17] established information theory as the foundation for analyzing secrecy. Cover and Thomas [6] provide comprehensive treatment of entropy, mutual information, and the data processing inequality—tools we employ to quantify privacy leakage.

Differential privacy [8, 9] provides a different privacy model based on indistinguishability of neighboring databases. Recent work on type systems for differential privacy [15, 10] inspired our type-theoretic approach to approximation, though we focus on information-theoretic rather than differential privacy guarantees.

6.5 Our Contributions

Our work synthesizes these threads into a unified framework:

  1. 1.

    We formalize oblivious Bernoulli types combining approximation (Bloom filters) with obliviousness (ORAM-style access hiding), providing dual privacy protection

  2. 2.

    We prove information-theoretic bounds decomposing leakage into access patterns and approximate results, showing both contribute bounded entropy

  3. 3.

    We demonstrate space-optimal constructions achieving theoretical lower bounds while maintaining privacy guarantees

  4. 4.

    We provide experimental validation confirming that theoretical bounds are tight and achievable in practice

The novelty lies not in individual components but in their synthesis: recognizing that probabilistic approximation provides privacy and formalizing this through information-theoretic analysis.

7 Conclusions and Future Work

We presented oblivious Bernoulli types as a unified framework for encrypted search. By combining approximation with obliviousness, we achieve:

  • Information-theoretic privacy bounds with quantifiable leakage

  • Space-optimal constructions matching theoretical lower bounds

  • Natural composition properties for complex queries

  • Predictable error accumulation through multi-step operations

Future directions include:

  • Extending to ranked retrieval and semantic search

  • Optimizing for specific query workloads and access patterns

  • Large-scale evaluation with real document collections

  • Formal verification of security properties

  • Integration with homomorphic encryption for stronger guarantees

The convergence of probabilistic data structures, information theory, and cryptographic techniques opens new possibilities for privacy-preserving information retrieval with provable guarantees.

References

  • [1] B. H. Bloom (1970) Space/time trade-offs in hash coding with allowable errors. Communications of the ACM 13 (7), pp. 422–426. Cited by: §2.3, Theorem 2.3, §6.2.
  • [2] A. Broder and M. Mitzenmacher (2004) Network applications of bloom filters: a survey. Internet Mathematics 1 (4), pp. 485–509. Cited by: §6.2.
  • [3] L. Carter, R. Floyd, J. Gill, G. Markowsky, and M. Wegman (1978) Exact and approximate membership testers. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, pp. 59–65. Cited by: §2.4, §2.4, Theorem 2.2, §6.2.
  • [4] D. Cash, P. Grubbs, J. Perry, and T. Ristenpart (2015) Leakage-abuse attacks against searchable encryption. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 668–679. Cited by: §6.1.
  • [5] D. Cash, S. Jarecki, C. Jutla, H. Krawczyk, M. Rosu, and M. Steiner (2013) Dynamic searchable encryption in very-large databases: data structures and implementation. In Network and Distributed System Security Symposium (NDSS), Cited by: §6.1.
  • [6] T. M. Cover and J. A. Thomas (2006) Elements of information theory. 2nd edition, John Wiley & Sons. Cited by: §2.3, §2.3, §6.4.
  • [7] R. Curtmola, J. Garay, S. Kamara, and R. Ostrovsky (2006) Searchable symmetric encryption: improved definitions and efficient constructions. In Proceedings of the 13th ACM Conference on Computer and Communications Security, pp. 79–88. Cited by: §6.1.
  • [8] C. Dwork, F. McSherry, K. Nissim, and A. Smith (2006) Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pp. 265–284. Cited by: §6.4.
  • [9] C. Dwork and A. Roth (2014) The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science 9 (3–4), pp. 211–407. Cited by: §6.4.
  • [10] M. Gaboardi, A. Haeberlen, J. Hsu, A. Narayan, and B. C. Pierce (2013) Linear dependent types for differential privacy. In ACM SIGPLAN Notices, Vol. 48, pp. 357–370. Cited by: §6.4.
  • [11] O. Goldreich and R. Ostrovsky (1996) Software protection and simulation on oblivious rams. Journal of the ACM 43 (3), pp. 431–473. Cited by: §2.3, §6.3.
  • [12] M. S. Islam, M. Kuzu, and M. Kantarcioglu (2012) Access pattern disclosure on searchable encryption: ramification, attack and mitigation. In Network and Distributed System Security Symposium (NDSS), Cited by: §6.1.
  • [13] S. Kamara, C. Papamanthou, and T. Roeder (2012) Dynamic searchable symmetric encryption. In Proceedings of the 2012 ACM Conference on Computer and Communications Security, pp. 965–976. Cited by: §6.1.
  • [14] A. Pagh, R. Pagh, and S. S. Rao (2005) An optimal bloom filter replacement. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 823–829. Cited by: §2.4, Theorem 2.2, §6.2.
  • [15] J. Reed and B. C. Pierce (2010) Distance makes the types grow stronger: a calculus for differential privacy. 45 (9), pp. 157–168. Cited by: §6.4.
  • [16] D. S. Roche, A. J. Aviv, and S. G. Choi (2016) Toward practical oblivious data structures. In IACR International Workshop on Security and Trust Management, pp. 172–188. Cited by: §6.3.
  • [17] C. E. Shannon (1948) A mathematical theory of communication. Vol. 27, Wiley Online Library. Cited by: §6.4.
  • [18] D. X. Song, D. Wagner, and A. Perrig (2000) Practical techniques for searches on encrypted data. In Proceedings of the 2000 IEEE Symposium on Security and Privacy, pp. 44–55. Cited by: §6.1.
  • [19] E. Stefanov, M. Van Dijk, E. Shi, C. Fletcher, L. Ren, X. Yu, and S. Devadas (2013) Path oram: an extremely simple oblivious ram protocol. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, pp. 299–310. Cited by: §6.3.
  • [20] X. S. Wang, T. H. Chan, and E. Shi (2015) Circuit oram: on tightness of the goldreich-ostrovsky lower bound. In Proceedings of the 2015 ACM SIGSAC Conference on Computer and Communications Security, pp. 850–861. Cited by: §6.3.