Bernoulli Sets: A Comprehensive Statistical Framework for Probabilistic Set Membership

Alexander Towell
Southern Illinois University Edwardsville
atowell@siue.edu
Abstract

We present Bernoulli sets, a comprehensive statistical framework for probabilistic set data structures that provides rigorous foundations for approximating set membership queries under uncertainty. Our framework unifies the treatment of fundamental data structures like Bloom filters, Count-Min sketches, and their variants through a principled latent/observed duality that distinguishes between true mathematical sets and their noisy computational approximations. We derive exact distributions and confidence intervals for all standard binary classification measures including positive predictive value (PPV), negative predictive value (NPV), accuracy, and F1-score, providing both closed-form expressions and asymptotic approximations. We introduce interval arithmetic for propagating uncertainty bounds through set operations when error rates are imprecisely known, yielding conservative guarantees for composed operations. For set-theoretic operations, we establish tight bounds on error propagation for union, intersection, complement, and symmetric difference under both independent and correlated error models. Our analysis reveals fundamental tradeoffs between space efficiency and error rates, showing how classical implementations like Bloom filters achieve near-optimal performance within our framework. The theory finds immediate applications in databases, networking, bioinformatics, and distributed systems where space-efficient approximate membership testing is crucial.

1 Introduction

Modern computing systems routinely face the challenge of testing set membership for massive collections where exact representations would be prohibitively expensive. Consider a distributed database checking whether a key exists across billions of records, a network router filtering malicious URLs from a blacklist of millions, or a genomics pipeline searching for k-mers in terabyte-scale sequence databases. In each case, the fundamental question “is element x in set S?” must be answered quickly using limited memory.

Probabilistic set data structures address this challenge by trading perfect accuracy for dramatic space savings. A Bloom filter [1], for instance, can represent a set using just 10 bits per element while maintaining a 1% false positive rate, compared to the hundreds of bits required for exact storage of typical keys. This order-of-magnitude compression enables applications that would otherwise be infeasible, from web crawling [2] to bioinformatics.

Despite their widespread adoption, probabilistic set data structures have traditionally been analyzed in isolation, with each variant—Bloom filters, counting filters, cuckoo filters, quotient filters—receiving separate treatment. This paper introduces Bernoulli sets, a unified statistical framework that captures the essential behavior of all these structures through a fundamental latent/observed duality.

1.1 The Latent/Observed Duality

At the heart of our framework lies a crucial distinction between two types of sets:

  • Latent sets S: The true mathematical sets that exist conceptually but are not directly accessible computationally.

  • Observed sets S~: The approximate representations we actually compute with, subject to false positives and false negatives.

This duality reflects a fundamental reality of approximate computing: we can never directly access the “true” set $S$; we can only interact with its noisy observation $\tilde{S}$. Every membership query $x \in S$ becomes a probabilistic test $x \in \tilde{S}$.

The complete probabilistic relationship between latent and observed sets is captured by a $2^{|\mathcal{U}|} \times 2^{|\mathcal{U}|}$ confusion matrix, where each entry

\[ Q_{A,B} = \Pr[\tilde{S} = B \mid S = A] \]

gives the probability of observing set $B$ when the true set is $A$. This exponentially large matrix is generally intractable.

In practice, we work with simplified models. The first-order model corresponds to a binary symmetric channel with a single error parameter ϵ, where membership bits are flipped with uniform probability.

The second-order model allows different rates for false positives (α) and false negatives (β), yielding the familiar 2×2 confusion matrix for individual membership queries:

\[
\begin{array}{l|cc}
 & x \in \tilde{S}\ \text{(observed)} & x \notin \tilde{S}\ \text{(observed)} \\
\hline
x \in S\ \text{(latent)} & 1-\beta & \beta \\
x \notin S\ \text{(latent)} & \alpha & 1-\alpha
\end{array}
\]

This second-order model with asymmetric errors is most relevant algorithmically, as real data structures often exhibit different false positive and false negative rates. For example, Bloom filters have β=0 (no false negatives) but α>0 (false positives possible). Other second-order variants allow element-specific error rates, capturing scenarios where different elements have different error probabilities due to hash collisions or load imbalance.
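To make the second-order model concrete, the following minimal Python sketch simulates individual membership queries under assumed rates α and β. The function name and the rates used are illustrative only, not part of any particular data structure.

```python
import random

def observed_membership(in_latent: bool, alpha: float, beta: float) -> bool:
    """One membership query against a second-order Bernoulli set.

    A true member is reported absent with probability beta (false negative);
    a non-member is reported present with probability alpha (false positive).
    """
    if in_latent:
        return random.random() >= beta   # answers True with probability 1 - beta
    return random.random() < alpha       # answers True with probability alpha

# Bloom-filter-like regime: beta = 0, alpha = 0.01 (illustrative values).
trials = 100_000
false_hits = sum(observed_membership(False, alpha=0.01, beta=0.0) for _ in range(trials))
print(false_hits / trials)   # empirical false positive rate, close to 0.01
```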

1.2 Contributions

This paper makes the following contributions:

1. **Unified Framework**: We develop Bernoulli sets as a comprehensive model for probabilistic set membership, showing how classical data structures emerge as special cases with particular (α,β) parameters.

2. **Distribution Theory**: We derive exact distributions for all binary classification measures (PPV, NPV, accuracy, F1-score) as functions of the confusion matrix parameters, providing both finite-sample results and asymptotic approximations.

3. **Interval Arithmetic**: We introduce systematic methods for propagating uncertainty when error rates are known only imprecisely, yielding conservative bounds for all derived measures.

4. **Set Operations**: We establish tight bounds on error propagation through union, intersection, complement, and symmetric difference operations, for both independent and correlated error models.

5. **Implementation Analysis**: We analyze space-time tradeoffs for Bloom filters, Count-Min sketches, and other implementations, showing how they achieve near-optimal performance within our framework.

1.3 Organization

Section 2 introduces the formal Bernoulli set model. Section 3 derives distributions for binary classification measures. Section 4 develops interval arithmetic for uncertain error rates. Section 5 analyzes set operations and error propagation. Section 6 examines implementations and applications. Section 7 concludes.

2 The Bernoulli Set Model

We begin by formalizing the notion of approximate set membership through a statistical model that captures the essential uncertainty in probabilistic data structures.

2.1 Basic Definitions

Let $\mathcal{U}$ denote a universe of possible elements. A set $S \subseteq \mathcal{U}$ is a collection of distinct elements from this universe. In classical set theory, membership is deterministic: for any $x \in \mathcal{U}$, either $x \in S$ or $x \notin S$.

Definition 2.1 (Bernoulli Set).

A Bernoulli set $\tilde{S}$ is a probabilistic representation of a latent set $S \subseteq \mathcal{U}$ characterized by two error parameters:

  • False positive rate $\alpha \in [0,1]$: $\Pr[x \in \tilde{S} \mid x \notin S] = \alpha$

  • False negative rate $\beta \in [0,1]$: $\Pr[x \notin \tilde{S} \mid x \in S] = \beta$

The membership test for a Bernoulli set is thus a random function:

\[ \mathrm{member}_{\tilde{S}} : \mathcal{U} \to \{0,1\} \]

where the output follows a Bernoulli distribution conditional on true membership.

2.2 Confusion Matrix Formalism

The complete behavior of a Bernoulli set over universe 𝒰 is specified by an exponentially-large confusion matrix:

Definition 2.2 (Full Confusion Matrix).

For a Bernoulli set $\tilde{S}$ approximating latent set $S \subseteq \mathcal{U}$, the full confusion matrix has dimension $2^{|\mathcal{U}|} \times 2^{|\mathcal{U}|}$ with entries:

\[ Q_{A,B} = \Pr[\tilde{S} = B \mid S = A] \quad \text{for all } A, B \subseteq \mathcal{U} \]

This exponential matrix captures all possible transitions from latent sets to observed sets. However, working with such matrices is intractable. We therefore consider two important special cases:

2.2.1 First-Order Bernoulli Sets

Definition 2.3 (First-Order Model - Binary Symmetric Channel).

A first-order Bernoulli set has a single error parameter ϵ, corresponding to a binary symmetric channel where each bit (membership indicator) is flipped with probability ϵ:

\[ \Pr[x \in \tilde{S} \mid S] = \begin{cases} 1-\epsilon & \text{if } x \in S \\ \epsilon & \text{if } x \notin S \end{cases} \]

This models uniform noise where false positives and false negatives occur at the same rate.

Equivalently, if we represent the set $S$ as a bit string where bit $i$ indicates membership of element $x_i$, then the observed set $\tilde{S}$ is the result of transmitting this bit string through a binary symmetric channel with crossover probability $\epsilon$.

2.2.2 Second-Order Bernoulli Sets

Second-order models allow different error rates for false positives and false negatives. The most common variant distinguishes these two error types:

Definition 2.4 (Second-Order Model - Asymmetric Errors).

A second-order Bernoulli set has separate parameters for false positive rate α and false negative rate β:

\[ \Pr[x \in \tilde{S} \mid S] = \begin{cases} 1-\beta & \text{if } x \in S \\ \alpha & \text{if } x \notin S \end{cases} \]

where $\alpha \neq \beta$ in general.

This is the most algorithmically relevant model, as many data structures naturally exhibit asymmetric errors (e.g., Bloom filters with β=0, α>0).

Remark 2.5 (Other Second-Order Variants).

Other second-order models include:

  • Element-specific rates: Different elements have different error probabilities $\alpha_x$, $\beta_x$, as occurs when hash functions create non-uniform distributions

  • Size-dependent rates: Error rates depend on |S|, as in structures where collision probability increases with load

  • Time-varying rates: Error rates change over time, as in aging or adaptive structures

For the standard second-order model with uniform asymmetric errors, each membership query has the familiar 2×2 confusion matrix:

\[
Q_{\text{element}} =
\begin{pmatrix}
\Pr[x \in \tilde{S} \mid x \in S] & \Pr[x \notin \tilde{S} \mid x \in S] \\
\Pr[x \in \tilde{S} \mid x \notin S] & \Pr[x \notin \tilde{S} \mid x \notin S]
\end{pmatrix}
=
\begin{pmatrix}
1-\beta & \beta \\
\alpha & 1-\alpha
\end{pmatrix}
\]

2.3 Special Cases

Several important special cases arise from particular choices of (α,β):

Example 2.6 (One-sided Error).
  • Bloom filter: β=0 (no false negatives), α>0 (false positives possible)

  • Lossy counter: α=0 (no false positives), β>0 (undercounting possible)

Example 2.7 (Degenerate Cases).
  • Exact set: α=β=0 (no errors)

  • Universal set: α=1,β=0 (always returns true)

  • Empty set: α=0,β=1 (always returns false)

  • Random set: α=β=0.5 (maximally uncertain)

2.4 Independence Assumptions

A key assumption in our base model is that membership tests are independent across elements:

Assumption 2.8 (Independence).

For distinct elements $x, y \in \mathcal{U}$, the events $\{x \in \tilde{S}\}$ and $\{y \in \tilde{S}\}$ are conditionally independent given the latent set $S$.

This assumption, while not always perfectly satisfied in practice (e.g., hash collisions in Bloom filters create dependencies), provides a tractable starting point for analysis. We relax this assumption when analyzing specific implementations.

3 Binary Classification Measures

Given a Bernoulli set with parameters (α,β), we now derive the distributions of standard binary classification measures. These measures quantify different aspects of approximation quality and are crucial for understanding the practical implications of using probabilistic data structures.

3.1 Fundamental Counts

Consider a latent set $S$ with $|S| = p$ positive elements and $|\mathcal{U} \setminus S| = n$ negative elements. When we test these elements against a Bernoulli set $\tilde{S}$, we obtain four random counts:

\begin{align}
TP &\sim \mathrm{Binomial}(p,\ 1-\beta) && \text{(True Positives)} \tag{1} \\
FN &\sim \mathrm{Binomial}(p,\ \beta) && \text{(False Negatives)} \tag{2} \\
FP &\sim \mathrm{Binomial}(n,\ \alpha) && \text{(False Positives)} \tag{3} \\
TN &\sim \mathrm{Binomial}(n,\ 1-\alpha) && \text{(True Negatives)} \tag{4}
\end{align}

These counts form the basis for all derived measures.
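A small simulation, under the independence assumption of Section 2.4, reproduces these counts; the population sizes and rates below are hypothetical.

```python
import numpy as np

def classification_counts(p, n, alpha, beta, rng=None):
    """Sample (TP, FN, FP, TN) for p positive and n negative queried elements."""
    rng = rng or np.random.default_rng()
    tp = rng.binomial(p, 1 - beta)   # Eq. (1); FN = p - TP has the Binomial(p, beta) marginal
    fp = rng.binomial(n, alpha)      # Eq. (3); TN = n - FP has the Binomial(n, 1 - alpha) marginal
    return tp, p - tp, fp, n - fp

print(classification_counts(p=1_000, n=100_000, alpha=0.01, beta=0.05))
```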

3.2 Positive Predictive Value (PPV)

The positive predictive value measures the probability that an element is truly in S given that it tests positive in S~.

Theorem 3.1 (PPV Distribution).

The positive predictive value is the random variable:

\[ \mathrm{PPV} = \frac{TP}{TP + FP} \]

with expectation approximately:

\[ \mathbb{E}[\mathrm{PPV}] \approx \frac{\bar{t}_p}{\bar{t}_p + \bar{f}_p} + \frac{\bar{t}_p\,\sigma_{fp}^2 - \bar{f}_p\,\sigma_{tp}^2}{(\bar{t}_p + \bar{f}_p)^3} \]

where $\bar{t}_p = p(1-\beta)$, $\bar{f}_p = n\alpha$, $\sigma_{tp}^2 = p\beta(1-\beta)$, and $\sigma_{fp}^2 = n\alpha(1-\alpha)$.

Proof.

Using the delta method for the ratio of random variables, we expand PPV around its mean values:

\[ \mathrm{PPV} = g(TP, FP) = \frac{TP}{TP + FP} \]

The first-order Taylor expansion gives:

\[ \mathbb{E}[\mathrm{PPV}] \approx g(\mathbb{E}[TP], \mathbb{E}[FP]) + \tfrac{1}{2}\,\mathrm{tr}(H\Sigma) \]

where H is the Hessian matrix and Σ is the covariance matrix. Since TP and FP are independent, Σ is diagonal, yielding the stated result. ∎

Corollary 3.2 (PPV Asymptotics).

As $n \to \infty$ with fixed $p$ and $\alpha > 0$:

\[ \mathrm{PPV} \to 0 \quad \text{almost surely} \]

This reveals a fundamental limitation: without controlling false positives (α), PPV degrades as the negative population grows.
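As a numerical sanity check of Theorem 3.1 and Corollary 3.2, the sketch below compares the delta-method approximation against a Monte Carlo estimate; all parameter values are illustrative.

```python
import numpy as np

def ppv_delta_approx(p, n, alpha, beta):
    """Second-order delta-method approximation of E[PPV] from Theorem 3.1."""
    t_bar, f_bar = p * (1 - beta), n * alpha
    var_tp, var_fp = p * beta * (1 - beta), n * alpha * (1 - alpha)
    return t_bar / (t_bar + f_bar) + (t_bar * var_fp - f_bar * var_tp) / (t_bar + f_bar) ** 3

def ppv_monte_carlo(p, n, alpha, beta, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    tp = rng.binomial(p, 1 - beta, trials)
    fp = rng.binomial(n, alpha, trials)
    return float(np.mean(tp / np.maximum(tp + fp, 1)))   # guard against 0/0

# Growing the negative pool with alpha fixed drives PPV toward 0 (Corollary 3.2).
for n in (10_000, 100_000, 1_000_000):
    print(n, ppv_delta_approx(1_000, n, 0.01, 0.05), ppv_monte_carlo(1_000, n, 0.01, 0.05))
```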

3.3 Negative Predictive Value (NPV)

The negative predictive value measures the probability that an element is truly not in S given that it tests negative in S~.

Theorem 3.3 (NPV Distribution).

The negative predictive value is:

\[ \mathrm{NPV} = \frac{TN}{TN + FN} \]

with expectation approximately:

\[ \mathbb{E}[\mathrm{NPV}] \approx \frac{\bar{t}_n}{\bar{t}_n + \bar{f}_n} + \frac{\bar{t}_n\,\sigma_{fn}^2 - \bar{f}_n\,\sigma_{tn}^2}{(\bar{t}_n + \bar{f}_n)^3} \]

where $\bar{t}_n = n(1-\alpha)$, $\bar{f}_n = p\beta$, $\sigma_{tn}^2 = n\alpha(1-\alpha)$, and $\sigma_{fn}^2 = p\beta(1-\beta)$.

3.4 Accuracy

Accuracy measures the overall proportion of correct classifications.

Theorem 3.4 (Accuracy Distribution).

The accuracy is:

\[ \mathrm{ACC} = \frac{TP + TN}{p + n} \]

with expectation and variance:

\begin{align}
\mathbb{E}[\mathrm{ACC}] &= \lambda(1-\beta) + (1-\lambda)(1-\alpha) \tag{5} \\
\mathrm{Var}[\mathrm{ACC}] &= \frac{\lambda\beta(1-\beta) + (1-\lambda)\alpha(1-\alpha)}{p+n} \tag{6}
\end{align}

where $\lambda = p/(p+n)$ is the prevalence of positive elements.
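A worked instance of Theorem 3.4, with hypothetical counts and error rates:

```python
p, n, alpha, beta = 1_000, 9_000, 0.02, 0.10   # hypothetical population and error rates
lam = p / (p + n)                               # prevalence lambda
mean_acc = lam * (1 - beta) + (1 - lam) * (1 - alpha)                                     # Eq. (5)
sd_acc = ((lam * beta * (1 - beta) + (1 - lam) * alpha * (1 - alpha)) / (p + n)) ** 0.5   # sqrt of Eq. (6)
print(mean_acc, sd_acc)    # roughly 0.972 with a standard deviation near 0.0016
```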

3.5 F1-Score

The F1-score is the harmonic mean of precision (PPV) and recall (TPR).

Theorem 3.5 (F1-Score).

The F1-score is:

\[ F_1 = \frac{2\,TP}{2\,TP + FP + FN} \]

with expectation approximately:

\[ \mathbb{E}[F_1] \approx \frac{2p(1-\beta)}{2p(1-\beta) + n\alpha + p\beta} \]

3.6 Youden’s J Statistic

Youden’s J statistic measures the overall discriminative ability.

Theorem 3.6 (Youden’s J).

Youden’s J statistic is:

\[ J = \mathrm{TPR} - \mathrm{FPR} = \frac{TP}{p} - \frac{FP}{n} \]

with expectation:

\[ \mathbb{E}[J] = (1-\beta) - \alpha \]

3.7 Confidence Intervals

For large p and n, the central limit theorem provides asymptotic confidence intervals.

Theorem 3.7 (Asymptotic Confidence Intervals).

For confidence level 1γ, asymptotic confidence intervals are:

\begin{align}
\mathrm{FPR} &\in \alpha \pm z_{\gamma/2}\sqrt{\frac{\alpha(1-\alpha)}{n}} \tag{7} \\
\mathrm{TPR} &\in (1-\beta) \pm z_{\gamma/2}\sqrt{\frac{\beta(1-\beta)}{p}} \tag{8}
\end{align}

where $z_{\gamma/2}$ is the $(1-\gamma/2)$ quantile of the standard normal distribution.
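A minimal helper for these Wald-style intervals; the function name and sample sizes below are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def wald_interval(rate, m, gamma=0.05):
    """Asymptotic 1 - gamma confidence interval for a Bernoulli rate estimated
    from m independent queries (Theorem 3.7)."""
    z = NormalDist().inv_cdf(1 - gamma / 2)
    half = z * sqrt(rate * (1 - rate) / m)
    return max(0.0, rate - half), min(1.0, rate + half)

print(wald_interval(0.01, m=100_000))     # interval for FPR = alpha, from n negatives
print(wald_interval(1 - 0.05, m=1_000))   # interval for TPR = 1 - beta, from p positives
```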

4 Interval Arithmetic for Error Bounds

In practice, error rates α and β are often known only imprecisely. We develop interval arithmetic to propagate this uncertainty through all derived measures.

4.1 Interval Representation

Definition 4.1 (Interval Error Rates).

When error rates are uncertain, we represent them as intervals:

\begin{align}
\alpha &\in [\alpha_{\min}, \alpha_{\max}] \tag{9} \\
\beta &\in [\beta_{\min}, \beta_{\max}] \tag{10}
\end{align}

where $0 \le \alpha_{\min} \le \alpha_{\max} \le 1$ and similarly for $\beta$.

4.2 Basic Interval Operations

For intervals $[a,b]$ and $[c,d]$:

\begin{align}
[a,b] + [c,d] &= [a+c,\ b+d] \tag{11} \\
[a,b] \cdot [c,d] &= [\min(ac, ad, bc, bd),\ \max(ac, ad, bc, bd)] \tag{12} \\
1 - [a,b] &= [1-b,\ 1-a] \tag{13}
\end{align}
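A small sketch of these interval operations as a Python value type; the class and method names are ours, chosen only for illustration.

```python
from typing import NamedTuple

class Interval(NamedTuple):
    lo: float
    hi: float

    def __add__(self, other):                     # Eq. (11)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):                     # Eq. (12)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def one_minus(self):                          # Eq. (13): 1 - [a, b] = [1 - b, 1 - a]
        return Interval(1 - self.hi, 1 - self.lo)

alpha = Interval(0.008, 0.012)
beta = Interval(0.00, 0.02)
print(alpha + beta, alpha * beta, beta.one_minus())
```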

4.3 Propagation Through Measures

Theorem 4.2 (PPV Interval).

Given $\alpha \in [\alpha_{\min}, \alpha_{\max}]$ and $\beta \in [\beta_{\min}, \beta_{\max}]$:

\[ \mathrm{PPV} \in \left[\ \frac{p(1-\beta_{\max})}{p(1-\beta_{\max}) + n\,\alpha_{\max}},\ \frac{p(1-\beta_{\min})}{p(1-\beta_{\min}) + n\,\alpha_{\min}}\ \right] \]
Proof.

PPV is monotonically increasing in $(1-\beta)$ and decreasing in $\alpha$. The minimum occurs at maximum $\beta$ and $\alpha$; the maximum occurs at minimum $\beta$ and $\alpha$. ∎

Theorem 4.3 (Accuracy Interval).

Given uncertain prevalence $\lambda \in [\lambda_{\min}, \lambda_{\max}]$, the accuracy lies in:

\[ \mathrm{ACC} \in \left[\ \min_{\lambda,\alpha,\beta} \mathrm{acc}(\lambda,\alpha,\beta),\ \max_{\lambda,\alpha,\beta} \mathrm{acc}(\lambda,\alpha,\beta)\ \right] \tag{14} \]

where the optimization considers the dependency structure of the accuracy formula.

4.4 Conservative Bounds

When dependencies between parameters are unknown, we adopt conservative (worst-case) bounds:

Definition 4.4 (Conservative Interval).

The conservative interval for a measure M(α,β,λ) is:

\[ M_{\text{conservative}} = \Bigl[\ \inf_{\theta \in \Theta} M(\theta),\ \sup_{\theta \in \Theta} M(\theta)\ \Bigr] \]

where Θ is the Cartesian product of all parameter intervals.

4.5 Interval Width and Uncertainty

The width of an interval quantifies our uncertainty:

Definition 4.5 (Uncertainty Measure).

For interval $I = [a,b]$, the uncertainty is:

\[ U(I) = b - a \]
Proposition 4.6 (Uncertainty Propagation).

For PPV with fixed $p$ and $n$:

\[ U(\mathrm{PPV}_{\text{interval}}) \le \frac{p\,U(\beta) + n\,U(\alpha)}{p(1-\beta_{\max}) + n\,\alpha_{\min}} \]
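The PPV bounds of Theorem 4.2 and the resulting interval width are straightforward to compute; the following sketch uses hypothetical populations and rate intervals.

```python
def ppv_interval(p, n, alpha_lo, alpha_hi, beta_lo, beta_hi):
    """Conservative PPV interval under uncertain error rates (Theorem 4.2).

    PPV increases in 1 - beta and decreases in alpha, so the endpoints are
    attained at the extreme parameter values.
    """
    lo = p * (1 - beta_hi) / (p * (1 - beta_hi) + n * alpha_hi)
    hi = p * (1 - beta_lo) / (p * (1 - beta_lo) + n * alpha_lo)
    return lo, hi

lo, hi = ppv_interval(p=1_000, n=100_000,
                      alpha_lo=0.008, alpha_hi=0.012,
                      beta_lo=0.00, beta_hi=0.02)
print((lo, hi), hi - lo)   # the width hi - lo is the uncertainty U(I)
```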

5 Set Operations and Error Propagation

We now analyze how errors propagate through set-theoretic operations on Bernoulli sets.

5.1 Union Operation

Theorem 5.1 (Union Error Rates).

For independent Bernoulli sets $\tilde{A}$ and $\tilde{B}$ with parameters $(\alpha_A, \beta_A)$ and $(\alpha_B, \beta_B)$:

\begin{align}
\alpha_{A \cup B} &= \alpha_A + \alpha_B - \alpha_A\alpha_B \tag{15} \\
\beta_{A \cup B} &= \beta_A\,\beta_B \tag{16}
\end{align}
Proof.

For false positives: an element $x \notin A \cup B$ appears in $\widetilde{A \cup B}$ if it appears in $\tilde{A}$ or $\tilde{B}$:

\[ \Pr[x \in \widetilde{A \cup B} \mid x \notin A \cup B] = 1 - (1-\alpha_A)(1-\alpha_B) = \alpha_A + \alpha_B - \alpha_A\alpha_B \]

For false negatives: an element $x \in A \cup B$ is missing from $\widetilde{A \cup B}$ only if it is missing from both:

\[ \Pr[x \notin \widetilde{A \cup B} \mid x \in A \cup B] = \beta_A\,\beta_B \]

when $x$ is in both sets. The general case requires considering which set(s) contain $x$. ∎

Corollary 5.2 (Union Bounds).

For any union:

\begin{align}
\alpha_{A \cup B} &\le \alpha_A + \alpha_B \tag{17} \\
\beta_{A \cup B} &\le \min(\beta_A, \beta_B) \tag{18}
\end{align}

5.2 Intersection Operation

Theorem 5.3 (Intersection Error Rates).

For independent Bernoulli sets:

\begin{align}
\alpha_{A \cap B} &= \alpha_A\,\alpha_B \tag{19} \\
\beta_{A \cap B} &= \beta_A + \beta_B - \beta_A\beta_B \tag{20}
\end{align}
Proof.

For false positives: $x \notin A \cap B$ appears in $\widetilde{A \cap B}$ only if it appears in both observed sets:

\[ \Pr[x \in \widetilde{A \cap B} \mid x \notin A \cap B] = \alpha_A\,\alpha_B \]

For false negatives: $x \in A \cap B$ is missing if it is missing from either observed set:

\[ \Pr[x \notin \widetilde{A \cap B} \mid x \in A \cap B] = 1 - (1-\beta_A)(1-\beta_B) \]

∎

5.3 Complement Operation

Theorem 5.4 (Complement Error Rates).

For Bernoulli set $\tilde{A}$ with parameters $(\alpha, \beta)$:

\[ \widetilde{\mathcal{U} \setminus A} \ \text{has parameters}\ (\beta, \alpha) \]
Proof.

The complement swaps the roles of positives and negatives, thus swapping false positive and false negative rates. ∎

5.4 Symmetric Difference

Theorem 5.5 (Symmetric Difference).

For $A \,\triangle\, B = (A \setminus B) \cup (B \setminus A)$:

\begin{align}
\alpha_{A \triangle B} &= \alpha_A(1-\alpha_B) + \alpha_B(1-\alpha_A) \tag{21} \\
\beta_{A \triangle B} &= 1 - (1-\beta_A)(1-\beta_B)\bigl(2 - (1-\beta_A)(1-\beta_B)\bigr) \tag{22}
\end{align}
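Under the independence assumption, the error-rate algebra of Theorems 5.1, 5.3, and 5.4 can be composed mechanically; a minimal sketch with illustrative rates follows.

```python
def union_errors(a1, b1, a2, b2):
    """(alpha, beta) for the union of two independent Bernoulli sets (Theorem 5.1)."""
    return a1 + a2 - a1 * a2, b1 * b2

def intersection_errors(a1, b1, a2, b2):
    """(alpha, beta) for the intersection (Theorem 5.3)."""
    return a1 * a2, b1 + b2 - b1 * b2

def complement_errors(a, b):
    """(alpha, beta) for the complement: the two rates swap roles (Theorem 5.4)."""
    return b, a

ab = (0.01, 0.0, 0.02, 0.05)         # two illustrative filters, the first Bloom-like
print(union_errors(*ab))             # (0.0298, 0.0)
print(intersection_errors(*ab))      # (0.0002, 0.05)
print(complement_errors(0.01, 0.0))  # (0.0, 0.01)
```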

5.5 Correlated Errors

When errors are correlated (e.g., due to shared hash functions), the independence assumptions break down.

Definition 5.6 (Correlation Coefficient).

The correlation between membership tests is:

\[ \rho = \frac{\mathrm{Cov}[X_A, X_B]}{\sqrt{\mathrm{Var}[X_A]\,\mathrm{Var}[X_B]}} \]

where $X_A$ and $X_B$ are indicator variables for membership in $\tilde{A}$ and $\tilde{B}$.

Theorem 5.7 (Correlated Union).

With correlation ρ:

\[ \alpha_{A \cup B} = \alpha_A + \alpha_B - \alpha_A\alpha_B - \rho\sqrt{\alpha_A(1-\alpha_A)\,\alpha_B(1-\alpha_B)} \]

5.6 Composition of Operations

Theorem 5.8 (Error Accumulation).

For $k$ composed operations, error rates grow as:

\begin{align}
\alpha_k &\le 1 - (1-\alpha)^k \approx k\alpha \ \text{for small } \alpha \tag{23} \\
\beta_k &\le 1 - (1-\beta)^k \approx k\beta \ \text{for small } \beta \tag{24}
\end{align}

This linear growth in error rates limits the depth of practical compositions.
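A short numeric illustration of Eq. (23), comparing the exact accumulated rate with the linear approximation for a hypothetical α:

```python
alpha = 0.01
for k in (1, 2, 4, 8, 16):
    exact = 1 - (1 - alpha) ** k                     # accumulated false positive rate after k operations
    print(k, round(exact, 4), round(k * alpha, 4))   # exact vs. the linear k*alpha approximation
```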

6 Implementation and Applications

We now examine how classical probabilistic data structures implement
Bernoulli sets and analyze their space-time tradeoffs.

6.1 Bloom Filters

The Bloom filter is the canonical implementation of a Bernoulli set with β=0.

Theorem 6.1 (Bloom Filter Parameters).

A Bloom filter with m bits, k hash functions, and n elements achieves:

\[ \alpha = \left(1 - e^{-kn/m}\right)^k \]

The optimal choice $k = (m/n)\ln 2$ yields $\alpha = 2^{-k} \approx 0.6185^{m/n}$.

Proof.

The probability that a specific bit is not set by a specific hash of a specific element is $1 - 1/m$. After inserting $n$ elements with $k$ hashes each:

\[ \Pr[\text{bit} = 0] = \left(1 - \frac{1}{m}\right)^{kn} \approx e^{-kn/m} \]

A false positive occurs when all $k$ bits for a non-member are set:

\[ \alpha = \left(1 - e^{-kn/m}\right)^k \]

Minimizing with respect to k yields the stated optimum. ∎

Corollary 6.2 (Space Complexity).

To achieve false positive rate α requires:

\[ m = \frac{-n\ln\alpha}{(\ln 2)^2} \approx 1.44\, n\log_2(1/\alpha) \ \text{bits} \]
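A small sizing helper that applies Theorem 6.1 and Corollary 6.2; rounding $k$ to an integer is our choice and slightly perturbs the achieved rate.

```python
from math import ceil, exp, log

def bloom_parameters(n, target_alpha):
    """Size a Bloom filter for n elements and a target false positive rate,
    using m = -n ln(alpha) / (ln 2)^2 and k = (m/n) ln 2 (Theorem 6.1, Corollary 6.2)."""
    m = ceil(-n * log(target_alpha) / log(2) ** 2)
    k = max(1, round((m / n) * log(2)))
    achieved = (1 - exp(-k * n / m)) ** k       # actual rate with the rounded k
    return m, k, achieved

print(bloom_parameters(n=1_000_000, target_alpha=0.01))
# roughly 9.6e6 bits (~1.2 MB), k = 7, achieved alpha close to 0.01
```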

6.2 Counting Bloom Filters

Counting Bloom filters extend standard Bloom filters to support deletions and multiplicity queries.

Theorem 6.3 (Counting Filter Overflow).

With $c$-bit counters and load factor $\lambda = kn/m$:

\[ \Pr[\text{overflow}] \le m \cdot \Pr[\mathrm{Poisson}(\lambda) > 2^c - 1] \]

6.3 Count-Min Sketch

The Count-Min sketch approximates multiset membership with one-sided error.

Theorem 6.4 (Count-Min Error).

With width $w = e/\epsilon$ and depth $d = \ln(1/\delta)$:

\[ \Pr\bigl[\,|\tilde{c}(x) - c(x)| \le \epsilon\,\|c\|_1\,\bigr] \ge 1 - \delta \]

where $\tilde{c}(x)$ is the estimated count and $c(x)$ is the true count.
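Choosing sketch dimensions from this guarantee reduces to two ceilings; a minimal sketch with illustrative ε and δ:

```python
from math import ceil, e, log

def count_min_dimensions(epsilon, delta):
    """Width and depth for the guarantee |estimate - true| <= epsilon * ||c||_1
    with probability at least 1 - delta (Theorem 6.4)."""
    return ceil(e / epsilon), ceil(log(1 / delta))

print(count_min_dimensions(epsilon=0.001, delta=0.01))   # (2719, 5): counters per row, number of rows
```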

6.4 Cuckoo Filters

Cuckoo filters provide an alternative to Bloom filters with better cache locality and deletion support.

Theorem 6.5 (Cuckoo Filter Parameters).

A cuckoo filter with load factor $\alpha_{\text{load}} = 0.95$ and fingerprint size $f$ bits achieves:

\[ \alpha \le \frac{8n}{2^f\, b} \]

where b is the bucket size (typically 4).

6.5 Applications

6.5.1 Database Systems

Bernoulli sets accelerate database operations:

  • Join optimization: Pre-filter join candidates using Bloom filters

  • Duplicate detection: Identify potential duplicates in large datasets

  • Query routing: Route queries to relevant shards in distributed
    databases

Example 6.6 (Distributed Join).

Consider joining tables R and S across a network. Send a Bloom filter $B_R$ of R’s join keys to S’s node and filter S locally, transmitting only tuples whose key tests positive in $B_R$. This removes roughly a $(1-\alpha)$ fraction of the non-joining tuples of $S$ from the network traffic.

6.5.2 Network Systems

Network applications leverage space efficiency:

  • Packet filtering: Block malicious IPs or URLs

  • Flow monitoring: Track unique flows in high-speed networks

  • Content routing: Forward requests in content delivery networks

Example 6.7 (DDoS Mitigation).

Maintain a Bloom filter of legitimate source IPs seen recently. During an attack, drop packets from IPs not in the filter. The false positive rate α bounds how much attack traffic still slips through, while legitimate users not yet recorded in the filter bear the collateral damage of being dropped.

6.5.3 Bioinformatics

Genomic applications handle massive sequence data:

  • K-mer indexing: Test presence of sequence fragments

  • Read classification: Assign sequencing reads to reference genomes

  • Variant calling: Identify genetic variations efficiently

Example 6.8 (K-mer Membership).

The human genome has about $3 \times 10^9$ base pairs, yielding roughly $3 \times 10^9$ 31-mers. Exact storage requires >100 GB. A Bloom filter with $\alpha = 0.01$ uses about 15 GB, enabling in-memory processing on commodity hardware.

6.6 Space-Time Tradeoffs

Theorem 6.9 (Information-Theoretic Lower Bound).

Any data structure answering membership queries with false positive rate α and no false negatives requires at least:

\[ n\log_2(1/\alpha) - O(n) \]

bits in the worst case.

Proof.

There are $\binom{|\mathcal{U}|}{n}$ possible sets of size $n$. To distinguish them with error rate $\alpha$ requires $(1-\alpha)\log_2\binom{|\mathcal{U}|}{n}$ bits. For large $|\mathcal{U}|$, this approaches $n\log_2(1/\alpha)$. ∎

Corollary 6.10 (Bloom Filter Optimality).

Bloom filters achieve within 44% of the information-theoretic lower bound:

\[ \frac{m_{\text{Bloom}}}{m_{\text{lower}}} = \frac{1.44\, n\log_2(1/\alpha)}{n\log_2(1/\alpha)} = 1.44 \]

7 Case Studies

7.1 Case Study 1: Web Crawling

Web crawlers must track billions of visited URLs to avoid redundant fetches. Storing exact URLs requires 50-100 bytes per URL. For 10 billion URLs, this demands 500GB-1TB of RAM.

Solution: Use a Bloom filter with $\alpha = 0.001$. Space requirement: $1.44 \times 10^{10} \times \log_2(1000)$ bits $\approx 17$ GB. The false positive rate means 0.1% of new URLs are incorrectly skipped, acceptable for most crawlers.

Optimization: Use rolling Bloom filters with time-based eviction for crawl freshness.

7.2 Case Study 2: Distributed Caching

Content delivery networks (CDNs) need to quickly determine which edge server likely has cached content.

Solution: Each edge server maintains a Bloom filter of cached objects, shared with routing layer. Router checks filters to identify servers likely to have content.

Analysis: With $n = 10^6$ cached objects per server and $\alpha = 0.01$, each filter uses about 1.2 MB. For 1000 servers, the routing table uses about 1.2 GB in total. False positives cause occasional unnecessary forwarding, a modest price for the space saved over exact routing tables.

7.3 Case Study 3: Genomic Analysis

Metagenomics pipelines classify DNA reads by testing against reference databases of known organisms.

Challenge: The human microbiome database contains $>10^{12}$ unique k-mers across thousands of species.

Solution: Build species-specific Bloom filters. Test each read’s k-mers against filters to identify likely source organisms.

Performance: With $\alpha = 0.001$ per k-mer and 100 k-mers per read, the read-level false positive rate is $1 - (1 - 0.001)^{100} \approx 0.095$. Confirmatory alignment validates classifications.
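The read-level rate quoted above is a one-line calculation:

```python
alpha_kmer = 0.001                 # per-k-mer false positive rate
kmers_per_read = 100
read_fp = 1 - (1 - alpha_kmer) ** kmers_per_read
print(read_fp)                     # about 0.095, hence the confirmatory alignment step
```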

8 Conclusion

This paper presented Bernoulli sets as a comprehensive statistical framework for probabilistic set membership. Our key contributions include:

1. **Unified Theory**: We showed how the latent/observed duality provides a principled foundation for understanding all probabilistic set data structures through their confusion matrices.

2. **Rigorous Analysis**: We derived exact distributions and confidence intervals for all standard performance measures, providing both theoretical insights and practical formulas for system designers.

3. **Uncertainty Quantification**: Our interval arithmetic framework enables robust analysis when error rates are imprecisely known, yielding conservative guarantees for risk-averse applications.

4. **Operational Calculus**: We established tight bounds on error propagation through set operations, revealing fundamental limits on composability.

5. **Practical Impact**: We demonstrated how classical implementations achieve near-optimal space-time tradeoffs within our framework, validating decades of engineering practice.

8.1 Future Directions

Several promising directions extend this work:

Adaptive Error Rates: Develop Bernoulli sets that adapt their error rates based on access patterns, allocating lower error rates to frequently queried elements.

Learned Indexes: Integrate machine learning to predict membership, using Bernoulli sets as a theoretical framework for learned data structures.

Distributed Protocols: Extend the framework to distributed settings where set representations must be synchronized across nodes with bandwidth constraints.

Privacy-Preserving Variants: Combine with differential privacy to create data structures that are both space-efficient and privacy-preserving.

Higher-Order Types: Generalize beyond sets to Bernoulli relations, functions, and other higher-order types with appropriate confusion matrices.

The Bernoulli set framework provides a solid theoretical foundation for approximate membership testing, unifying disparate techniques under a common statistical model. As data volumes continue to grow exponentially, such principled approaches to approximation become increasingly vital for building scalable systems.

8.2 Acknowledgments

[Acknowledgments would appear here in final version]

References

  • [1] B. H. Bloom (1970). Space/time trade-offs in hash coding with allowable errors. Communications of the ACM 13(7), pp. 422–426.
  • [2] A. Broder and M. Mitzenmacher (2004). Network applications of Bloom filters: a survey. Internet Mathematics 1(4), pp. 485–509.

Appendix A Proof Details

A.1 Delta Method for PPV

The delta method approximates the distribution of g(X) where X is a random vector and g is a differentiable function.

For $\mathrm{PPV} = g(TP, FP) = TP/(TP+FP)$:

\begin{align}
\nabla g &= \left( \frac{FP}{(TP+FP)^2},\ -\frac{TP}{(TP+FP)^2} \right) \tag{25} \\
H_g &= \begin{pmatrix} -\dfrac{2\,FP}{(TP+FP)^3} & \dfrac{TP-FP}{(TP+FP)^3} \\[1.2ex] \dfrac{TP-FP}{(TP+FP)^3} & \dfrac{2\,TP}{(TP+FP)^3} \end{pmatrix} \tag{26}
\end{align}

The second-order approximation:

\[ \mathbb{E}[g(X)] \approx g(\mu) + \tfrac{1}{2}\,\mathrm{tr}\bigl(H_g(\mu)\,\Sigma\bigr) \]

With TP and FP independent:

\[ \Sigma = \begin{pmatrix} p\beta(1-\beta) & 0 \\ 0 & n\alpha(1-\alpha) \end{pmatrix} \]

Substituting and simplifying yields the stated formula.

A.2 Information-Theoretic Bound Proof

Consider the problem of representing any subset $S \subseteq \mathcal{U}$ with $|S| = n$ such that membership queries have false positive rate at most $\alpha$.

The number of possible sets is $\binom{|\mathcal{U}|}{n}$. To represent one with error probability $\alpha$, we must distinguish it from the $\binom{|\mathcal{U}|}{n} - 1$ others.

For each wrong set $S'$, the probability of error on a random query is at least $|S \triangle S'| / |\mathcal{U}|$. To achieve error rate $\alpha$, we need:

\[ \frac{|S \triangle S'|}{|\mathcal{U}|} \le 1 - \alpha \]

By counting arguments, this requires:

\[ \log_2\binom{|\mathcal{U}|}{n} \approx n\log_2(|\mathcal{U}|/n) \ge n\log_2(1/\alpha) \]

when $n/|\mathcal{U}| \le \alpha$.