Machine Learning – How Machine Learning Is Incorporated into Search Engine Design

language-agnostic, lucene, machine-learning, search, search-engine

I am currently building a small in-house search engine based on Apache Lucene. Its purpose is simple: given some keywords, it suggests articles written internally within our company. I am using fairly standard TF-IDF scoring as a base metric and built my own scoring mechanism on top of it. All of this seems to work well, except for some corner cases where the ranking seems messed up.

So what I am planning to do is add small Relevant/Not Relevant links to the search results page, so that users can click one of them depending on whether they think that result should have been included in the first place.

My Idea

  1. Treat these Relevant/Not Relevant clicks as labels and create training data.
  2. Use this data to train a classifier (such as an SVM).
  3. Incorporate this model into the search engine, i.e., every new result will pass through the classifier and be assigned a label indicating whether it is relevant or not.

This approach seems intuitive to me, but I am not sure whether it will work in practice. I have two specific questions:

  1. What features should I extract?
  2. Is there a better way to integrate the machine learning component into the search engine? My final goal is to "learn" the ranking function based on both business logic and user feedback.

Best Answer

(1) What features should I extract?

First, realize that you're not classifying documents. You're classifying (document, query) pairs, so you should extract features that express how well they match.

The standard approach in learning to rank is to run the query against various search engine setups (e.g. tf-idf, BM25, etc.) and then train a model on the similarity scores, but for a small, domain-specific search engine, you could use features such as the following (a feature-extraction sketch follows the list):

  • For each term, a boolean that indicates whether the term occurs in both the query and the document. Or maybe not a boolean, but the tf-idf weights of those query terms that actually occur in the document.
  • Various overlap metrics such as Jaccard or Tanimoto.
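To make this concrete, here is a minimal sketch of (document, query) pair features, assuming scikit-learn's TfidfVectorizer; the exact feature set and the function name are illustrative assumptions, not a fixed recipe:

```python
# Illustrative (document, query) pair features: term overlap, summed tf-idf
# weight of matched query terms, and Jaccard overlap of the term sets.
from sklearn.feature_extraction.text import TfidfVectorizer

def pair_features(query, document, vectorizer):
    """Features describing how well `document` matches `query` (hypothetical)."""
    analyze = vectorizer.build_analyzer()
    q_terms, d_terms = set(analyze(query)), set(analyze(document))
    overlap = q_terms & d_terms

    # tf-idf weights of the query terms that actually occur in the document
    doc_vec = vectorizer.transform([document])
    vocab = vectorizer.vocabulary_
    matched_weight = sum(doc_vec[0, vocab[t]] for t in overlap if t in vocab)

    union = q_terms | d_terms
    jaccard = len(overlap) / len(union) if union else 0.0

    return [len(overlap), matched_weight, jaccard]

# Fit the vectorizer on the whole document collection first.
docs = ["scoring internal articles with lucene", "notes on the coffee machine"]
vectorizer = TfidfVectorizer().fit(docs)
print(pair_features("lucene scoring", docs[0], vectorizer))
```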

(2) Is there a better way to integrate the machine learning component into the search engine? My final goal is to "learn" the ranking function based on both business logic and user feedback.

This is a very broad question, and the answer depends on how much effort you want to put in. The first improvement that comes to mind: instead of using the binary relevance judgements from the classifier, use its real-valued decision function, so that you can actually do ranking instead of just filtering. For an SVM, the decision function is the signed distance to the hyperplane; good machine learning packages have an interface for retrieving that value.
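As an illustration, here is a minimal sketch assuming scikit-learn's LinearSVC and pair features like those above (the toy numbers are made up); the point is to rank by decision_function rather than filter by predict:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy training data: one pair-feature vector per (document, query) pair,
# labeled 1 = user clicked Relevant, 0 = user clicked Not Relevant.
X_train = np.array([[3, 0.9, 0.5], [0, 0.0, 0.0], [2, 0.7, 0.4], [1, 0.1, 0.1]])
y_train = np.array([1, 0, 1, 0])
svm = LinearSVC().fit(X_train, y_train)

# At query time, score every candidate result and sort, instead of keeping
# only those that predict() labels as relevant.
X_candidates = np.array([[2, 0.8, 0.3], [0, 0.0, 0.0], [1, 0.2, 0.1]])
scores = svm.decision_function(X_candidates)  # signed distance to hyperplane
ranking = np.argsort(-scores)                 # candidate indices, best first
print(scores, ranking)
```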

Beyond that, look into pairwise and listwise learning to rank; what you're suggesting is the so-called pointwise approach. IIRC, pairwise works a lot better in practice. One reason is that with pairwise ranking you need far fewer clicks: instead of having users label documents as relevant/irrelevant, you only give them the "relevant" button. You then learn a binary classifier on triples (document1, document2, query) that tells whether document1 is more relevant to the query than document2, or vice versa. When a user labels, say, document 4 in the ranking as relevant, that gives you six samples to learn from:

  • document4 > document3
  • document4 > document2
  • document4 > document1
  • document1 < document4
  • document2 < document4
  • document3 < document4

so you get the negatives for free.
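Here is a hedged sketch of that construction, using the common RankSVM-style reduction of classifying feature differences; the function name and toy numbers are illustrative assumptions:

```python
# When the user marks one displayed result as relevant, pair it with every
# other displayed result and train a binary classifier on the feature
# DIFFERENCES of each pair.
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_samples(features, relevant_idx):
    """features: one pair-feature vector per displayed result (same query).
    relevant_idx: index of the result the user clicked as relevant."""
    X, y = [], []
    for i, f in enumerate(features):
        if i == relevant_idx:
            continue
        diff = features[relevant_idx] - f
        X.append(diff)    # relevant minus other  -> "first doc wins" (1)
        X.append(-diff)   # other minus relevant  -> "second doc wins" (0)
        y += [1, 0]
    return np.array(X), np.array(y)

# Four results were shown; the user clicked the fourth (index 3) as relevant,
# which yields the six samples listed above.
shown = np.array([[1, 0.1, 0.1], [2, 0.3, 0.2], [0, 0.0, 0.0], [3, 0.9, 0.6]])
X, y = pairwise_samples(shown, relevant_idx=3)
ranker = LinearSVC(fit_intercept=False).fit(X, y)

# The learned weight vector scores a single document as w . f, so sorting
# candidates by that dot product produces a ranking for a new query.
print(ranker.coef_)
```

Training on both diff and -diff keeps the classes balanced and makes the data antisymmetric, which is why no intercept is fitted here.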

(These are all just suggestions; I haven't tried any of this. I just happen to have worked in a research group where people investigated learning to rank. I did do a presentation of someone else's paper for a reading group once; maybe the slides can be of help.)
