Algorithm to detect if a word is spelled correctly

algorithms

I am trying to develop a JavaScript spellchecker that doesn't use a dictionary and can, given a single word, detect whether it is spelled correctly. Right now, I just have a list of substrings that never occur within words, and if the word contains one of those substrings, I count it as misspelled. For example, I would have a substring "lll", and any word containing "lll" (such as "I'lll") would be counted as misspelled.
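
In code, my current approach looks roughly like this (a simplified sketch; the real substring list is much longer):

```javascript
// Substrings that (almost) never occur inside correctly spelled English words.
const forbidden = ["lll", "tkt", "qq", "zxz"];

// Current approach: flag a word as misspelled if it contains any forbidden substring.
function looksMisspelled(word) {
  const lower = word.toLowerCase();
  return forbidden.some(sub => lower.includes(sub));
}

console.log(looksMisspelled("I'lll"));    // true
console.log(looksMisspelled("accidant")); // false -- the problem case
```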

However, I'm finding this doesn't work as well as expected. Most misspelled words seem to involve letters in the wrong order, or words that don't follow common rules, and the substring approach catches neither. For example, there is no good substring for the misspelling "accidant".

I'm looking for a more effective method of determining whether a word is probably misspelled, ideally something that handles letters in the wrong order and accidental presses of keys adjacent to the correct one (though solutions to other common causes of misspelling are welcome too).

This is English-only, so it doesn't need to work with other languages.
Also, false positives are a much larger problem for me than false negatives, so I would rather err on the side of calling a word correctly spelled when in fact it is not.

Best Answer

Since I'm working on a similar problem myself, I can provide some guidance.

The quickest way I've found to detect errors (but not necessarily correct them) is an n-gram search. You can store the n-grams in an array whose size is the alphabet size raised to the nth power. Given an array @words containing every word in a corpus of texts in your language, and using trigrams (n-grams of three letters):

my %trigrams;
my $ngramSize = 3;   # n = 3 for trigrams
for my $word (@words) {
    next if length($word) < $ngramSize;
    # Count every trigram that occurs in the word.
    $trigrams{ substr($word, $_, $ngramSize) }++ for (0 .. length($word) - $ngramSize);
}
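
In the asker's target JavaScript, the same table could be built and queried like this (a minimal sketch; the tiny corpus here is a stand-in for a real word list, and "suspicious" just means the word contains a trigram the corpus has never produced):

```javascript
// Build a trigram frequency table from a word list (stand-in corpus).
function buildTrigrams(words, n = 3) {
  const counts = new Map();
  for (const word of words) {
    if (word.length < n) continue;
    for (let i = 0; i <= word.length - n; i++) {
      const gram = word.slice(i, i + n);
      counts.set(gram, (counts.get(gram) || 0) + 1);
    }
  }
  return counts;
}

// Flag a word as suspicious if it contains any trigram never seen in the corpus.
// Words shorter than n are never flagged, which suits the "prefer false negatives" goal.
function isSuspicious(word, trigrams, n = 3) {
  for (let i = 0; i <= word.length - n; i++) {
    if (!trigrams.has(word.slice(i, i + n))) return true;
  }
  return false;
}

const corpus = ["accident", "accidental", "incident", "academic"];
const trigrams = buildTrigrams(corpus);
console.log(isSuspicious("accident", trigrams)); // false
console.log(isSuspicious("accidant", trigrams)); // true -- "ida", "dan", "ant" never occur in the corpus
```

Note this catches the "accidant" example from the question, which the forbidden-substring list could not.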

You would probably want to normalize the data in some way, which also lets you store it more efficiently. For example, you could take the median occurrence count, map it (and anything higher) to 255, and scale anything lower proportionally. That would let you store, for instance, a rough English spell checker's trigram table in about 17K, or as little as 2K if you're willing to reduce each trigram to a black-or-white good/bad bit (and since most possible trigrams never occur, you can probably compress even further).
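
One possible version of that normalization in JavaScript (a sketch; the 255 cap and median choice follow the suggestion above, but the exact scaling is up to you):

```javascript
// Normalize raw trigram counts to one byte each (0-255).
// The median count (and anything above it) maps to 255; smaller counts scale proportionally.
function normalizeCounts(counts) {
  const values = [...counts.values()].sort((a, b) => a - b);
  const median = values[Math.floor(values.length / 2)];
  const scaled = new Map();
  for (const [gram, c] of counts) {
    scaled.set(gram, Math.min(255, Math.round((c / median) * 255)));
  }
  return scaled;
}

const raw = new Map([["abc", 1], ["bcd", 2], ["cde", 4]]);
console.log(normalizeCounts(raw)); // abc -> 128, bcd -> 255, cde -> 255 (median is 2)
```

Each entry then fits in a single byte, which is where the small table sizes come from.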

Because that would load very quickly, you could use it to flag candidates with roughly 90% accuracy and then, once a full and proper spell checker has downloaded, use that instead, prioritizing the likely misspellings before checking the likely correct words. If you expect the user to use your site regularly, you can also save the dictionary to local storage for virtually instant recall, rather than making them download it every single time.

But English, with its incredibly irregular spelling and constant importing of words without adaptation, absolutely requires a dictionary (although, because English is a largely analytic language, you can actually hold the entire dictionary in memory, unlike with highly inflected or polysynthetic languages).
