Fuzzy matching allows you to identify non-exact matches of your target item. It is the foundation of many search engine frameworks and one of the main reasons why you can get relevant search results even if your query has a typo or a different verb tense.

As you might expect, there are many algorithms that can be used for fuzzy searching on text, but virtually all search engine frameworks (including bleve) use primarily the Levenshtein Distance for fuzzy string matching:

 

Levenshtein Distance

Also known as Edit Distance, it is the number of transformations (deletions, insertions, or substitutions) required to transform a source string into the target one. For example, if the target term is “book” and the source is “back”, you will need to change the first “o” to “a” and the second “o” to “c”, which gives us a Levenshtein distance of 2.

It is such a simple algorithm that some companies actually ask candidates to implement it during coding interviews (you can find Levenshtein implementations in JavaScript, Kotlin, Java, and many other languages here).
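As a sketch, here is a straightforward Python implementation of the usual dynamic-programming formulation (function and variable names are my own, not from any particular framework):

```python
def levenshtein(source: str, target: str) -> int:
    """Edit distance: minimum number of deletions, insertions, and substitutions."""
    if len(source) < len(target):
        source, target = target, source
    previous = list(range(len(target) + 1))
    for i, s in enumerate(source, start=1):
        current = [i]
        for j, t in enumerate(target, start=1):
            cost = 0 if s == t else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

print(levenshtein("book", "back"))  # 2: "o" -> "a" and "o" -> "c"
```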

 

Some frameworks additionally support the Damerau-Levenshtein distance:

 

Damerau-Levenshtein distance

It is an extension of the Levenshtein distance, allowing one extra operation: the transposition of two adjacent characters.

Ex: TSAR to STAR

Damerau-Levenshtein distance = 1  (switching the positions of “S” and “T” costs only one operation)

Levenshtein distance = 2  (replace “T” with “S” and “S” with “T”)
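As a sketch, the commonly used optimal-string-alignment variant of Damerau-Levenshtein is plain Levenshtein plus one extra case for adjacent transpositions (this is an illustrative implementation, not the code any particular framework uses):

```python
def damerau_levenshtein(source: str, target: str) -> int:
    """Optimal-string-alignment variant: Levenshtein plus adjacent transpositions."""
    n, m = len(source), len(target)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if source[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution
            if (i > 1 and j > 1
                    and source[i - 1] == target[j - 2]
                    and source[i - 2] == target[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[n][m]

print(damerau_levenshtein("tsar", "star"))  # 1 (one transposition)
```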


Fuzzy matching and relevance

 

Fuzzy matching has one big side effect: it messes with relevance. Although Damerau-Levenshtein covers most common user misspellings, it can also introduce a significant number of false positives, especially in a language whose words average just five letters, such as English. That is why most search engine frameworks prefer to stick with the Levenshtein distance. Let’s look at a real example:

First, we are going to use this movie catalog dataset, which I highly recommend if you want to play with full-text search. Then, let’s search for movies with “book” in the title. A simple query would look like the following:
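The original snippet is not reproduced here, so as a stand-in, here is a minimal, hypothetical Python sketch of what a case-insensitive single-term title match does conceptually (the `movies` list is a tiny toy sample, not the full dataset):

```python
movies = ["The Jungle Book", "The Book of Eli", "Black Book",
          "Hook", "The Notebook", "Look Who's Talking"]

def match(term: str, catalog: list[str]) -> list[str]:
    # Case-insensitive match: the term must equal one of the title's words.
    term = term.lower()
    return [title for title in catalog if term in title.lower().split()]

print(match("book", movies))  # ['The Jungle Book', 'The Book of Eli', 'Black Book']
```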

The code above will bring the following results:

 

By default, the results are case-insensitive, but you can easily change this behavior by creating new indexes with different analyzers.

Now, let’s add fuzzy matching capability to our query by setting fuzziness to 1 (Levenshtein distance 1), which means that “book” and “look” will have the same relevance.
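Conceptually, the fuzzy version only relaxes the word comparison from equality to “edit distance ≤ 1”. A hypothetical Python sketch (the `levenshtein` helper and toy `movies` list are my own, as before):

```python
def levenshtein(a: str, b: str) -> int:
    # Plain edit distance (same dynamic-programming approach as earlier).
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

movies = ["The Jungle Book", "The Book of Eli", "Black Book",
          "Hook", "The Notebook", "Look Who's Talking"]

def fuzzy_match(term: str, catalog: list[str], fuzziness: int = 1) -> list[str]:
    # A title matches if ANY of its words is within `fuzziness` edits of the term.
    term = term.lower()
    return [title for title in catalog
            if any(levenshtein(term, word) <= fuzziness
                   for word in title.lower().split())]

print(fuzzy_match("book", movies))  # now "Hook" and "Look Who's Talking" match too
```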

And here is the result:

 

Now, the movie called “Hook” is the very first search result, which might not be exactly what the user expects in a search for “Book”.

 

How to minimize false positives during fuzzy lookups

 

In an ideal world, users would never make typos while searching for something. However, that is not the world we live in, and if you want your users to have a pleasant experience, you have to handle at least an edit distance of 1. Therefore, the real question is: how can we do fuzzy string matching while minimizing relevance loss?

We can take advantage of one characteristic of most search engine frameworks: a match with a lower edit distance will usually score higher. That characteristic allows us to combine two queries with different fuzziness levels into one:
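A hypothetical Python sketch of the idea, emulating a disjunction of a fuzziness-0 query and a fuzziness-1 query over the same toy catalog (the scores 1.0 and 0.5 are arbitrary illustrations of “exact matches score higher”, not real framework scores):

```python
def levenshtein(a: str, b: str) -> int:
    # Plain edit distance (same dynamic-programming approach as earlier).
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

movies = ["The Jungle Book", "The Book of Eli", "Black Book",
          "Hook", "The Notebook", "Look Who's Talking"]

def disjunction_search(term: str, catalog: list[str]) -> list[str]:
    # Emulates a disjunction ("OR") of two queries: fuzziness 0 OR fuzziness 1.
    # Exact matches get a higher score, so they rank first.
    scored = []
    for title in catalog:
        best = min((levenshtein(term.lower(), w)
                    for w in title.lower().split()), default=99)
        if best <= 1:
            scored.append((1.0 if best == 0 else 0.5, title))
    scored.sort(key=lambda pair: -pair[0])  # stable: ties keep catalog order
    return [title for _, title in scored]

print(disjunction_search("book", movies))  # exact matches come before "Hook"
```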

Here is the result of the query above:

 

As you can see, this result is much closer to what the user might expect. Note that we are now using a class called DisjunctionQuery; a disjunction is the equivalent of the “OR” operator in SQL.

What else could we improve to reduce the negative side effect of fuzzy matching? Let’s reanalyze our problem to understand if it needs further improvement:

We already know that fuzzy lookups can produce some unexpected results (e.g. Book -> Look, Hook). However, a single-term search is usually a poor query anyway, as it barely gives us a hint of what the user is trying to accomplish.

Even Google does not know exactly what I’m looking for when I search for “table”:

Google search results for “table”

So, what is the average length of keywords in a search query? To answer this question, I will show a graph from Rand Fishkin’s 2016 presentation (he is one of the best-known gurus in the SEO world):

Keyword length in search queries

 

According to the graph above, ~80% of search queries have two or more keywords, so let’s try searching for the movie “Black Book” with fuzziness 1:
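To see why more keywords help, here is a hypothetical Python sketch where every query term must fuzzily match some word in the title, and titles with a lower total edit distance rank first (same toy `levenshtein` helper and `movies` sample as before):

```python
def levenshtein(a: str, b: str) -> int:
    # Plain edit distance (same dynamic-programming approach as earlier).
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

movies = ["The Jungle Book", "The Book of Eli", "Black Book",
          "Hook", "The Notebook", "Look Who's Talking"]

def multi_term_fuzzy(query: str, catalog: list[str], fuzziness: int = 1) -> list[str]:
    # Every query term must match some title word within `fuzziness` edits;
    # titles with a lower total edit distance rank first.
    terms = query.lower().split()
    scored = []
    for title in catalog:
        words = title.lower().split()
        total = 0
        for term in terms:
            best = min((levenshtein(term, w) for w in words),
                       default=fuzziness + 1)
            if best > fuzziness:
                break
            total += best
        else:
            scored.append((total, title))
    scored.sort(key=lambda pair: pair[0])
    return [title for _, title in scored]

print(multi_term_fuzzy("black book", movies))  # only "Black Book" survives both terms
```

With two terms, a title has to be a near-miss on both words at once, which is far less likely to happen by accident than with a single term.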

 

Result:

Not bad. We got the movie we were searching for as the first result. However, a disjunction query would still bring a better set of results.

Still, it looks like we have a nice new property here: the side effect of fuzzy matching decreases slightly as the number of keywords increases. A search for “Black Book” with fuzziness 1 can still match results like “back look” or “lack cook” (combinations with edit distance 1), but these are unlikely to be real movie titles.

A search for “book eli” with fuzziness 2 would still bring “The Book of Eli” as the third result:


However, as the average English word is five letters long, I would NOT recommend using an edit distance greater than 2 unless the user is searching for long words that are easy to misspell, like “Schwarzenegger” for instance (at least for non-Germans and non-Austrians).

 

Conclusion

 

In this article, we discussed fuzzy matching and how to overcome its major side effect without ruining relevance. Keep in mind that fuzzy matching is just one of the many features you should take advantage of while implementing a relevant and user-friendly search. We are going to discuss some of them during this series: N-Grams, Stopwords, Stemming, Shingles, Elision, etc.

Also check out Part 1 and Part 2 of this series.

In the meantime, if you have any questions, tweet me at @deniswsrosa.

Posted by Denis Rosa, Developer Advocate, Couchbase

Denis Rosa is a Developer Advocate for Couchbase and lives in Munich, Germany. He has solid experience as a software engineer and is fluent in Java, Python, Scala, and JavaScript. Denis likes to write about search, Big Data, AI, microservices, and everything else that helps developers build beautiful, fast, stable, and scalable apps.
