Data Structures – How Do Scalable Bloom Filters Work?

data-structures, hashing

I was reading up on scalable bloom filters and could not understand how, each time a constituent bloom filter fills up, a new bloom filter of a larger size is added.

It seems to me that the elements which set bits in the initially created filters can no longer be looked up for presence. Perhaps my understanding of this is wrong?

I do understand basic bloom filters. However, I cannot wrap my head around dynamic bloom filters.

Best Answer

Let me try to give this a shot to see how much I can butcher it. :-)

So, to start off, you need to be able to create a regular bloom filter that allows a finite number of elements with a maximum probability of a false positive. Adding these features to your basic filter is required before attempting to build a scalable implementation.

Before we try to control and optimize the false positive probability, let's figure out what that probability is for a given bloom filter size.

First we split the bitfield up by hash function: with k hash functions, the filter is partitioned into k slices, one per hash function, each slice getting (total number of bits / k) bits. Every element is then always described by exactly k bits, one in each slice. A rough sketch of this layout follows below.
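To make the slicing concrete, here is a rough, purely illustrative sketch in Python. The class name `SlicedBloomFilter` and the use of salted SHA-256 digests as the k hash functions are my own choices for the example, not anything prescribed by the paper.

    import hashlib

    class SlicedBloomFilter:
        """Illustrative sliced filter: k slices, one bit set per slice per element."""

        def __init__(self, num_slices, bits_per_slice):
            self.k = num_slices
            self.m = bits_per_slice
            # One independent bitfield per slice; total size is k * m bits.
            self.slices = [[False] * bits_per_slice for _ in range(num_slices)]

        def _indexes(self, element):
            # Derive k indexes, one per slice, from k differently-salted hashes.
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{element}".encode()).hexdigest()
                yield int(digest, 16) % self.m

        def add(self, element):
            for i, idx in enumerate(self._indexes(element)):
                self.slices[i][idx] = True

        def __contains__(self, element):
            # An element is (probably) present only if its bit is set in every slice.
            return all(self.slices[i][idx]
                       for i, idx in enumerate(self._indexes(element)))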

If you increase the number of slices or the number of bits per slice, the probability of false positives will decrease.

It also follows that as elements are added, more bits are set to 1, so false positives increase. The fraction of bits set to 1 in a slice is what we call the "fill ratio" of that slice.

When the filter holds a large amount of data, we can assume that the probability of a false positive for this filter is the fill ratio raised to the number of slices (if we were to actually count the bits instead of using a ratio, this becomes a permutation-with-repetition problem).
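As a small illustration of counting the bits directly, something like the helper below would estimate the false positive rate; the function name `false_positive_probability` is made up for this example, and it assumes the list-of-slices representation from the sketch above.

    def false_positive_probability(slices):
        """Estimate the false positive rate from a filter's slices (lists of bits)."""
        # Average fill ratio across slices, raised to the number of slices:
        # a false positive must hit a set bit in every one of the k slices.
        k = len(slices)
        bits_per_slice = len(slices[0])
        fill_ratio = sum(sum(s) for s in slices) / (k * bits_per_slice)
        return fill_ratio ** k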

So how do we control the probability of false positives in a bloom filter? We can modify the number of slices (which also affects the fill ratio).

To figure out how many slices we should have, we start off by figuring out the optimal fill ratio for a slice. Since the fill ratio is the fraction of bits in a slice that are 1, any given bit remains unset after a single insertion with probability (1 - 1/m), where m is the number of bits in the slice. Since we're going to insert multiple items, we have another permutation-with-repetition problem, and the expected fill ratio after n insertions expands out to 1 - (1 - 1/m)^n. It turns out that this is very close to another expression: the paper relates the fill ratio to 1 - e^(-n/m), which is much easier to work with analytically. After a bit of futzing with this, it turns out that the optimal fill ratio is always about 50%, regardless of any of the variables that you change.
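If you want to convince yourself numerically, here is a tiny sanity check (the values of m and n are arbitrary): with roughly n = m * ln 2 elements inserted, both the exact expression and the 1 - e^(-n/m) approximation land right around 50%.

    import math

    m = 1000                      # bits per slice (illustrative)
    n = int(m * math.log(2))      # elements inserted at the "optimal" point, n ~= m * ln 2

    exact = 1 - (1 - 1 / m) ** n      # expected fill ratio by direct expansion
    approx = 1 - math.exp(-n / m)     # the 1 - e^(-n/m) approximation from the paper

    print(exact, approx)              # both come out very close to 0.5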

So, since the false positive probability of a filter is the fill ratio raised to the number of slices, we can fill in 50% and get P = (50%)^k, or k = log_2(1/P). We can then use this to compute the number of slices we should generate for each filter in the list of filters of a scalable bloom filter.

    import math

    def slices_count(false_positive_probability):
        # k = log2(1 / P), rounded up to a whole number of slices.
        return math.ceil(math.log(1 / false_positive_probability, 2))
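To tie this back to the original question, here is a very rough sketch of how `slices_count` might plug into a scalable filter; it is not the paper's exact algorithm, and the class name, `growth` factor, and `tightening` ratio are illustrative choices (it also reuses the `SlicedBloomFilter` sketch from above). The key point for the question: when a filter fills up it is kept, a larger filter with a tighter error bound is appended, and a lookup checks every filter in the list, so elements added to the earlier filters remain findable.

    import math

    class ScalableBloomFilter:
        """Illustrative wrapper: grow by appending filters, never discard old ones."""

        def __init__(self, initial_capacity=100, error_rate=0.001,
                     growth=2, tightening=0.9):
            self.growth = growth            # each new filter holds `growth` times more...
            self.tightening = tightening    # ...with a tighter per-filter error rate
            self.filters = []               # list of [filter, capacity, count, error]
            self._add_filter(initial_capacity, error_rate)

        def _add_filter(self, capacity, error_rate):
            k = slices_count(error_rate)
            # At ~50% fill, n ~= m * ln 2, so size each slice as capacity / ln 2 bits.
            bits_per_slice = math.ceil(capacity / math.log(2))
            self.filters.append(
                [SlicedBloomFilter(k, bits_per_slice), capacity, 0, error_rate])

        def add(self, element):
            filt, capacity, count, error = self.filters[-1]
            if count >= capacity:
                # Current filter is "full": append a larger filter with a tighter
                # error rate and insert into that one from now on.
                self._add_filter(capacity * self.growth, error * self.tightening)
                filt = self.filters[-1][0]
            filt.add(element)
            self.filters[-1][2] += 1

        def __contains__(self, element):
            # Old filters are never thrown away, so earlier elements stay findable:
            # a lookup simply checks every filter in the list.
            return any(element in f[0] for f in self.filters)

As I understand the paper, each new filter gets a tighter error rate precisely so that the combined false positive probability across the whole list still stays below the overall target.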

Edit: After writing this, I came across a mention of the "fifty-percent rule" while reading up on buddy-system-based dynamic memory allocation in TAoCP Vol. 1, pp. 442-445, which gives a much cleaner argument than fitting the curve to 1 - e^(-n/m). Knuth also references a paper, "The fifty percent rule revisited", with a bit of background on the concept.
