Monday, September 9, 2013

Search Engines' Quest for Speed



I remember seeing the World Wide Web for the first time in 1994. I had just started college, and was in a dungeon of a computer lab with a friend of mine (whom I happened to marry seven years later). He pulled up Lynx, an early text-based browser, and showed me a page of text that physically resided on a remote computer. We could visit the URLs that he knew, read the text available, and tab between the underlined words to navigate to other pages of text – but that was it. The whole experience was altogether underwhelming. It wasn’t until I discovered search engines that the Web really seemed to come alive. 

As an end user, search engines seemed like magic to me. When I first went to work for Infoseek in 1997 (a popular Web search engine with 7 million visitors per month), I assumed that search engines looked for content just like a person might. They received a query, went out to the Web and followed a bunch of links to see if the landing page was relevant to the query, and then created a list of the relevant pages to return. Of course, even though computers are much faster than people at following links and the Web was much smaller back then, an exhaustive search following every query would have been ridiculously slow. I quickly learned that search engines rely on many tricks to provide fast responses. 

Yahoo, for example, didn’t actually search the Web at all at the time. Instead, they employed people to manually curate a directory of links, and returned the appropriate static list of links from their directory in response to common queries. Infoseek dynamically created its result lists, but to save time it did this using a local copy of the Web. Prior to serving up any search results, Infoseek visited all of the webpages on the Web and created an inverted index that mapped each unique word encountered on the crawl to a list of the webpages that contained it. This allowed for fast query-time matching between query terms and webpages, but ignored the rich semantics of text and the dynamics of the Web. 
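To make the idea concrete, here is a minimal sketch in Python of how an inverted index works. This is purely illustrative (the page names and matching rule are my own toy assumptions, not Infoseek's actual implementation): pages are indexed once ahead of time, and a query is then answered by intersecting the per-word page lists rather than scanning any documents.

```python
from collections import defaultdict

def build_inverted_index(pages):
    """Map each unique word to the set of page IDs that contain it.

    This work happens once, at crawl/index time, not per query.
    """
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

def search(index, query):
    """Return pages containing every query term (simple AND matching)."""
    terms = query.lower().split()
    if not terms:
        return set()
    # Intersect the postings for each term -- no page text is read here.
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Hypothetical two-page "Web" for illustration.
pages = {
    "infoseek.com": "web search engine results",
    "yahoo.com": "web directory of curated links",
}
index = build_inverted_index(pages)
print(search(index, "web search"))  # only infoseek.com matches both terms
```

Because the expensive crawling and indexing happen up front, each query costs only a few dictionary lookups and set intersections -- which is exactly why the approach is fast and why it ignores word order, meaning, and any page changes since the last crawl.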

Building on these early approaches, modern Web search engines continue to make a number of compromises to achieve near-instantaneous speed. They limit the complexity of the features and models used to identify relevant documents and make highly simplistic assumptions about language. Time-saving mechanisms such as search-result caching and index tiering are heavily exploited, despite the risk that such approaches may cause relevant content to be missed. 
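Search-result caching, one of the mechanisms mentioned above, can be sketched with a simple least-recently-used cache. This is a hedged illustration of the general technique, not any particular engine's design: popular queries are answered from memory and never touch the index at all, at the cost of possibly serving stale results.

```python
from collections import OrderedDict

class ResultCache:
    """A tiny LRU cache mapping query -> result list (illustrative only)."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, query):
        """Return cached results, or None on a miss."""
        if query not in self.cache:
            return None
        self.cache.move_to_end(query)  # mark as most recently used
        return self.cache[query]

    def put(self, query, results):
        """Store results, evicting the least recently used entry if full."""
        self.cache[query] = results
        self.cache.move_to_end(query)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
```

A query-serving loop would check `cache.get(query)` first and fall back to the index only on a miss -- fast for the heavy tail of repeated queries, but blind to any content indexed after the entry was cached.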

Web search engines target speed for a good reason: People perceive search results that are delivered quickly as higher quality and more engaging than slower results. This means that the exact same result list will seem more relevant to you if it is returned just a fraction of a second faster. 

The impact of search result speed on the user experience has been studied by looking at how people interact with results that are returned at different speeds. Some variation in search engine response time occurs naturally. If you were to issue the same query to a search engine twice and time the response you received, you would almost certainly find a difference due to minor variations in exactly how the query was processed. Variation can also be introduced artificially. Both Bing and Google have run experiments where they intentionally increase the load time of their search result pages by a fraction of a second. 

Search engines measure the quality of a user’s experience with a result page by looking at observable user behavior. For example, if a user clicks on a result following a query, that probably indicates that they found something that seems relevant. Likewise, if they make that click quickly they probably found the result easily. By comparing this observable behavior across slow and fast search results, researchers have shown that when results are even just one tenth of a second slower, people click less, click slower, and search less. 
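The comparison described above can be sketched as a small analysis script. The interaction records here are made up for illustration; the point is only the shape of the computation: group logged interactions by experimental cohort, then compare click rate and time-to-first-click between them.

```python
import statistics

# Hypothetical interaction logs from a latency experiment:
# each record is (cohort, clicked, seconds_to_first_click or None).
logs = [
    ("fast", True, 2.1), ("fast", True, 1.8), ("fast", False, None),
    ("slow", True, 3.0), ("slow", False, None), ("slow", False, None),
]

def summarize(cohort):
    """Return (click rate, mean time-to-first-click) for one cohort."""
    rows = [r for r in logs if r[0] == cohort]
    clicks = [r for r in rows if r[1]]
    click_rate = len(clicks) / len(rows)
    times = [r[2] for r in clicks]
    mean_time = statistics.mean(times) if times else None
    return click_rate, mean_time

for cohort in ("fast", "slow"):
    rate, mean_time = summarize(cohort)
    print(f"{cohort}: click rate {rate:.2f}, mean time-to-click {mean_time}")
```

With real logs at scale, a drop in click rate or a rise in time-to-click for the artificially delayed cohort is exactly the signal the studies above report.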

Speed is so important in search that even improvements that seem like they should positively impact the search experience can have negative outcomes if they slow the response time down. For example, when Google experimented with returning 30 results instead of 10, they found that the number of searches and revenue dropped significantly because the additional results took a half-second longer to load.

Many innovations in search start out negatively impacting response time. It takes time, money, and effort to optimize very different approaches to be as fast as existing ones. A challenge for those of us working to create the best search experience possible is to think boldly about how we can innovate in the face of this constant pressure to be fast.
 