Let me give this a shot and see how much I can butcher it. :-)
So, to start off, you need to be able to create a regular bloom filter that allows a finite number of elements with a maximum probability of a false positive. Adding these features to your basic filter is required before attempting to build a scalable implementation.
Before we try to control and optimize the probability, let's figure out what the probability is for a given bloom filter size.
First we split up the bitfield by how many hash functions we have (total number of bits / number of hash functions = bits per slice) to get k slices of bits, one per hash function, so every element is always described by k bits.
If you increase the number of slices or the number of bits per slice, the probability of false positives will decrease.
It also follows that as elements are added, more bits are set to 1, so false positives become more likely. The fraction of bits set to 1 in a slice is what we call its "fill ratio".
When the filter holds a large number of elements, we can approximate the probability of a false positive as the fill ratio raised to the number of slices: with fill ratio p and k slices, P = p^k, so at a 50% fill ratio and 4 slices, for example, P = 6.25%. (If we were to actually count the bits instead of using a ratio, this simplifies into a permutation with repetition problem.)
So, how do we hit a chosen false-positive probability in a bloom filter? The knob we have is the number of slices (which in turn affects the fill ratio).
To figure out how many slices we should have, we start by working out the optimal fill ratio for a slice. Since the fill ratio is determined by how many bits in a slice are 1 versus 0, each insertion leaves any given bit unset with probability (100% - (1 / bits in a slice)). Since we're going to insert multiple items, we have another permutation with repetition problem, and expanding things out gives an expected fill ratio of (100% - ((100% - (1 / bits in a slice)) ^ "elements inserted")). It turns out this is very close to another expression: in the paper, they relate the fill ratio to (1 - e^(-n/m)), which fits nicely as a Taylor-series approximation. After a bit of futzing with this, it turns out that the optimal fill ratio is always about 50%, regardless of which variables you change.
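As a quick sanity check, here's a small sketch (my own, not from the paper; the function names are mine) comparing the exact expectation with the approximation:

import math

def expected_fill_ratio(bits_per_slice, elements_inserted):
    # Exact expectation: 1 - (1 - 1/m)^n
    return 1 - (1 - 1 / bits_per_slice) ** elements_inserted

def approximate_fill_ratio(bits_per_slice, elements_inserted):
    # The paper's approximation: 1 - e^(-n/m)
    return 1 - math.exp(-elements_inserted / bits_per_slice)

With, say, 1024 bits per slice and 710 insertions, both come out at roughly 50%.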
So, since the false-positive probability of a filter is the fill ratio raised to the number of slices, we can plug in 50% and get P = (50%)^k, or k = log_2(1/P). We can then use this to compute the number of slices we should generate for a given filter in the list of filters of a scalable bloom filter:
import math

def slices_count(false_positive_probability):
    # k = log2(1 / P), rounded up to a whole number of slices
    return math.ceil(math.log(1 / false_positive_probability, 2))
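For example, a 1% false-positive target works out to ceil(log2(100)) = 7 slices:

>>> slices_count(0.01)
7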
Edit: After writing this, I came across a mention of the "fifty-percent rule" when reading up on buddy-system-based dynamic memory allocation in TAoCP Vol 1, pp 442-445, with much cleaner reasoning than fitting the curve to (1-e^(-n/m)). Knuth also references a paper, "The fifty percent rule revisited", with a bit of background on the concept (pdf available here).
I am currently undertaking a similar (although more generic) project with my lab. As such, I want to warn you that this feature is a rabbit hole that can get very complicated very quickly. The first thing you need to do is think about your users and your goals and decide what's "good enough" or you will spend a lot of time developing a feature that, in the grand scheme of your site, might not be that important.
Basically you want some sort of information retrieval system. Think of a mini-Google, but not nearly as complex. First you need to decide how you will define similarity between articles (a metric). This will be handled in your preprocessing. Generally your actual comparison will be the same no matter what your metric is (typically cosine similarity).
Defining a Metric
First, you need to decide what makes articles similar. There are two main approaches: looking for similarities in article topics or looking for similarities in article text. Topics will give better results but text is easier to implement.
Similarity by Topic
As mentioned several times, the easiest way to implement this system is to let authors specify topics through tags. You would then search for articles with large overlaps in tags. If the tags are numerous and fine-grained enough, this should give the best results.
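As a minimal sketch of that overlap search (my own illustration, assuming tags are stored as plain string sets; Jaccard similarity is one simple way to score the overlap):

def tag_overlap(tags_a, tags_b):
    # Jaccard similarity: |intersection| / |union|, ranges from 0 to 1
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0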
The disadvantage is that you need to put a lot of thought into what the tags are to ensure you have coverage, clarity, and a lack of redundancy. If you take the Stack Exchange approach of letting users create their own tags then you can increase coverage but you need to moderate the tags to maintain the clarity/lack of redundancy. However, the greatest drawback of this approach is that you are trusting users to appropriately tag their posts. SE gets around this problem by letting other users edit and make suggestions for the tags.
You can get even better results if you tag topics at the sentence or paragraph level. It gives a better representation of which topics are more important in an article, but it's more work, and the effort grows quickly as the tagging scope gets smaller.
What about an automated solution to take the work load off the users? Automatic Topic Identification is something that has been studied a lot. I'm not an expert at it but I suggest you read a few papers and decide if you feel these solutions are mature enough to give reliable results. My concern with this approach is that since you admit your domain is niche you might have a hard time finding an out-of-the-box solution and will need to implement the topic identifier yourself. At that point you might as well just do text-based similarity because it will be much easier and out-of-the-box solutions exist.
Similarity by Text
In this approach instead of comparing topic tags, you compare the actual words in the article. The advantage is that the preprocessing is much easier to accomplish. The disadvantage is that it assumes that similar text means a similar topic, which is not always the case.
Making it Work
In general, whichever metric you choose, you will end up with a vector representing each article. Maybe the vector is of word frequencies, maybe of topic tags. You then need to compare the vectors for your articles to see which are similar.
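To make that concrete, here's a minimal cosine-similarity sketch over raw term-frequency vectors (my own illustration; a real system would use weighted terms and a toolkit):

import math
from collections import Counter

def cosine_similarity(words_a, words_b):
    # Build term-frequency vectors and compute the cosine of the angle between them
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[term] * b[term] for term in a.keys() & b.keys())
    norm_a = math.sqrt(sum(n * n for n in a.values()))
    norm_b = math.sqrt(sum(n * n for n in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

The same function works whether the vectors hold word counts or tag counts.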
The Stanford Natural Language Processing Course offered on coursera.com is a good introduction to Information Retrieval (specifically the Week 7 lectures). Keep in mind that the solutions presented in those lectures are relatively basic, but it's a good start.
I would strongly suggest trying to find an out-of-the-box implementation here. Failing that, using a toolkit like Apache Lucene will greatly simplify your development.
Now you need to test out a bunch of term-weighting algorithms and see which one gives the best results for your data. TREC (the Text REtrieval Conference) runs evaluation tracks that push toward better and better weighting algorithms. Check the proceedings on their website for discussions of newer, more accurate weighting schemes.
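As a baseline before evaluating fancier schemes, plain tf-idf is the usual starting point. A sketch, assuming each document is a list of tokens and the scored document is itself part of the corpus:

import math
from collections import Counter

def tf_idf(doc, corpus):
    # doc: list of tokens; corpus: list of such documents (doc included)
    tf = Counter(doc)
    weights = {}
    for term, count in tf.items():
        doc_freq = sum(1 for d in corpus if term in d)  # >= 1 since doc is in corpus
        weights[term] = count * math.log(len(corpus) / doc_freq)
    return weights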
Best Answer
The nice thing about bloom filters is that their space requirement can be scaled arbitrarily, at the cost of more false positives as the size is decreased.
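To put rough numbers on that trade-off, here's the textbook optimal-sizing formula for a plain bloom filter (a sketch of mine, not specific to any library):

import math

def bits_per_element(false_positive_rate):
    # Optimal m/n for a bloom filter: -log2(p) / ln(2) bits per stored item
    return -math.log2(false_positive_rate) / math.log(2)

That works out to about 9.6 bits per element for a 1% false-positive rate, versus hundreds of bits for storing a typical string outright.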
If you want no false positives whatsoever, you can't take probabilistic shortcuts and will not be able to reduce space requirements significantly (which isn't that much of an issue as storing each of your strings sequentially would only use up a few hundred MB per set).
There are two important representations of string sets: hash tables and tries.
These data structures allow parallel access, and each access is O(1) with respect to the number of items in the set.
Tries have the advantage that common prefixes of strings are shared, thus potentially reducing space requirements below that of sequential storage. A read-only trie can also be compressed further.
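For illustration, a minimal uncompressed trie sketch (my own; note that a lookup costs O(length of the string), independent of how many strings are stored):

class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode; shared by all strings with this prefix
        self.is_end = False  # marks that a stored string ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def add(self, string):
        node = self.root
        for char in string:
            node = node.children.setdefault(char, TrieNode())
        node.is_end = True

    def __contains__(self, string):
        node = self.root
        for char in string:
            if char not in node.children:
                return False
            node = node.children[char]
        return node.is_end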