Tuesday, November 5, 2013

The Case for Slow Search

As discussed in a previous post, Web searchers expect search engines to return results instantaneously. To meet these expectations, search engines make many compromises to shave milliseconds off their response time. But it is ironic that a few milliseconds matter so much when over half of our interactions with a search engine involve multiple queries and take minutes or even hours. Just think, for example, of the last time you planned a vacation or researched a potential medical diagnosis. For these tasks, the quality of the experience – and not speed – is what matters.

While someone searching for a specific website or a straightforward fact will always want an immediate response, people invest a significant amount of time in more complex or exploratory search tasks. Not surprisingly, search engine response time, while important for all types of tasks, impacts different types of tasks differently. Millisecond differences in response time hurt navigational queries more than they do informational queries.

In recent years a number of “slow movements” have emerged that advocate for reducing speed in exchange for increasing quality. You are probably familiar with the slow food movement, which proposes using traditional and regional food preparation as an alternative to fast food. Other examples include slow parenting, slow travel, and even slow science. Building on these movements, Kevyn, Ryen, Sue, and I are exploring the concept of slow search, where a nuanced notion of time is employed to create a high quality search experience. Slow search can be used to help users take the necessary time to learn as they search, understand their sources, and explore tangents, as well as to algorithmically identify high quality, relevant information. Much of our early effort in understanding slow search has focused on how search engines might make use of additional time to produce better results.

Algorithmic slow search approaches are particularly valuable when people have intermittent, slow, or expensive network connections. In such cases it can be difficult for searchers to employ traditional search strategies, such as rapidly reformulating queries. You are probably familiar with a type of slow search, but call it “mobile search.” Because mobile phones have limited bandwidth, slower search processing times may be acceptable, given that most of the latency a searcher observes comes from the network as data is fetched to the device. Search engines designed to support search in rural regions already make use of additional time to help searchers limit the number of iterations necessary to find what they are looking for. Future space travelers may also appreciate slow search. It takes over 25 minutes for information to travel from Mars to Earth and back again. If a search engine were to take an additional few minutes to identify better results during the round trip, it is unlikely that the searcher would even notice the extra time invested.
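As a rough sketch of this idea (the timing numbers and the 10% "noticeability" threshold below are illustrative assumptions, not measurements):

```python
def hidden_compute_budget(round_trip_s: float, baseline_processing_s: float,
                          noticeable_fraction: float = 0.1) -> float:
    """Extra server-side processing time (in seconds) that fits within a
    request before the searcher perceives a slowdown, assuming (as a toy
    model) that any increase under `noticeable_fraction` of the total
    latency goes unnoticed.
    """
    total = round_trip_s + baseline_processing_s
    return total * noticeable_fraction

# Light alone takes roughly 25 minutes (~1500 s) for a Mars round trip at a
# typical Earth-Mars distance, so even this conservative model leaves a
# sizable budget for slower, higher-quality ranking.
mars_budget = hidden_compute_budget(round_trip_s=1500.0, baseline_processing_s=60.0)
```

The point of the sketch is simply that when network latency dominates, the compute budget scales with it: the slower the link, the more "free" time a search engine has to think.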

Algorithmic slow search approaches can also be used to proactively identify content that a user is likely to search for in the future. For example, it is now possible to predict if an individual will resume a search task at a later date. Search engines can make use of the time between sessions to slowly produce high-quality search results that could then be presented immediately when a search task is resumed.
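A minimal sketch of what between-session precomputation might look like. The resumption predictor and the expensive ranker here are hypothetical stand-ins, not a real system:

```python
# Cache of slowly-computed results, keyed by a search task identifier.
results_cache: dict[str, list[str]] = {}

def will_resume(task_id: str) -> bool:
    # Hypothetical predictor; a real system would use behavioral signals
    # to estimate whether this task will be picked up again later.
    return True

def slow_high_quality_rank(query: str) -> list[str]:
    # Stand-in for an expensive ranking pass that might take minutes.
    return [f"doc-{i}" for i in range(3)]

def on_session_end(task_id: str, last_query: str) -> None:
    # Between sessions, spend the idle time producing better results.
    if will_resume(task_id):
        results_cache[task_id] = slow_high_quality_rank(last_query)

def on_session_resume(task_id: str, query: str) -> list[str]:
    # Serve the precomputed results instantly if they exist;
    # otherwise fall back to computing them on demand.
    return results_cache.get(task_id) or slow_high_quality_rank(query)
```

The design choice is the familiar one of trading idle time for perceived latency: the slow computation happens when nobody is waiting, so the resumed session feels instantaneous.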

The question, of course, is how can a search engine use extra time to actually produce better results? We have invested so much effort into making search engines as fast as possible that it is almost impossible to imagine what search should look like without time as a constraint. But I am excited to try!

Related paper:
J. Teevan, K. Collins-Thompson, R. W. White, S. T. Dumais and Y. Kim. Slow Search: Information Retrieval without Time Constraints. HCIR 2013.


  1. Hi, Jaime. You know, on this, I wonder if a problem with the idea of extra computational power producing better results is the lack of information around intent.

    The problem with the top three search results usually isn't that we haven't been able to spend enough time thinking, but that queries are so short that we just don't have the information about what the searcher wants. Worse, the searcher might not even know what they want; searchers often need more information before they can fully understand what they need.

    So, perhaps the solution is more in the search-as-a-dialogue approach: a rapid back-and-forth in which each query gives information to the searcher, which yields a more specific query based on that information, which gives yet more information, and so on, refining all the way down to what is hopefully a satisfying answer. That still, by the way, has a lot of the flavor of personalized search (since your immediately previous queries would influence the next) and, I think, is badly underdone (given how few people look below the first three search results, maybe we should consider using all the rest of the space on the page to try ways to engage in this dialogue and help with refinement).

    All this rambling aside about intent, dialogues, and how more computation on each query might not help: I'm sure you're familiar with Wolfram Alpha, and it's a pretty nice example of at least a first step toward spending more computational power to give a more complete answer on individual queries. I don't know how successful they've been with that -- when I've used it, my failure to specify what I want (or even fully know what I want) in the first query has always been a problem -- but it seems like a nice example of what you are proposing.

    By the way, I hope you don't mind me commenting here -- I couldn't resist trying to start a discussion with you on this one. Oh, and I love your blog; it's a fun way to see more of your thoughts outside of what gets published more formally in your papers. Please do keep posting.

  2. Yes! I definitely agree that people need help expressing rich intents -- although, as you know, we can also use context to infer a surprising amount about what they want. Slow search approaches can't just take a short, two-word query and run with it, but instead need to help the searcher accurately describe what they want. Diane Kelly had some interesting work on this at SIGIR a while back.

    But you also bring up another important value to the time people currently spend searching, related to the fact that you don't always "even fully know what [you] want." Not only does a search engine learn about the searcher's intent over the course of a session, but the searcher learns about the search space in a way that helps them understand the final results. We've recently done some work to understand what people learn and how so that slow search approaches can preserve that important aspect -- perhaps that would be a fun thing to write a little more about.

    I love having you comment here, Greg -- that's why the blog exists! Thanks.

    P.S. We cite a blog post of yours (http://bit.ly/2J2amv) in an upcoming CACM article on Slow Search. :)

  3. Gang: Sorry that I'm late to the party. Jaime, I like that you're pushing this slow search concept. I too first came across it back in 2006, via the exact same Greg Linden blog post that you cite, and had many discussions with Greg on his blog around that time.

    I thought about it a lot between 2006 and 2008 when I was working on the algorithmically-mediated collaborative search stuff. Because if you think about it, when you're working with another person on a difficult search task, you don't really need your partner's algorithmic influence to alter your own search results within 200 ms of your partner taking some action. You might not even need to know for five whole minutes what your partner has been up to, because you're busy reading some other document that you'd found. So if it takes five minutes rather than 200 ms for your partner's activity to propagate itself onto your search results, in that algorithmically-mediated way, I doubt either of you would even notice. And since neither of you would notice, you'd both probably prefer that the calculation take five minutes, if the results ended up being significantly better, rather than 200 ms for results that were only marginally better.

    In 2009, I ended up writing about this idea of getting great results in longer time rather than decent results in shorter time on my personal blog. I don't know if this would either be of any interest or use to you, but here are a few select posts from that time period.

    The first (April 2009) is literally titled "More and Faster versus Smarter and More Effective":


    The second (June 2009) attempts to strike at the heart of the matter by suggesting that the problem may lie in the metrics that we're trying to optimize. With the wrong metrics, we'll design the wrong kinds of systems to satisfy information needs. Speed does matter... but speed of what?


    In another post (April 2009) I elaborate on the troubles I've had with search engines that give me really fast popular answers, but cause me to spend dozens of minutes trying to instruct the system in what I really want to find, which isn't always the popular answer. The amount of time (milliseconds) that I save in getting a fast answer pales in comparison to the amount of time that I waste (20 minutes) trying to come up with a query that doesn't keep delivering me into the heart of the popular results.


    Finally, one last post that I'll share (March 2009) isn't about speed, specifically. But it is about the idea of giving users a way to practice and get better at searching, thereby instilling both confidence and passion. Maybe it's a stretch, but I tend to think that engines that focus on speed do so to the detriment of bolstering user virtuosity.


  4. And speaking of the Diane Kelly paper, are you talking about the "Query Length in Interactive Retrieval" one from SIGIR 2003? I wrote a little about that in 2009 as well, and about how the focus on shorter, simpler (i.e. "fast") queries works against getting better results. Naturally longer queries will take longer -- more inverted lists to join. But if the results are better with longer queries, then that's the whole point of slow search.
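    To make the "more inverted lists to join" point concrete, here is a toy sketch (a hypothetical three-term index and a textbook merge intersection; not any particular engine's implementation) -- each extra query term adds a posting list to intersect, so the query costs more but also narrows the results:

```python
# Toy inverted index: term -> sorted list of document IDs containing it.
index = {
    "slow":    [1, 2, 4, 7, 9],
    "search":  [1, 3, 4, 7, 8],
    "quality": [2, 4, 7],
}

def intersect(a: list[int], b: list[int]) -> list[int]:
    """Merge-style intersection of two sorted posting lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

def conjunctive_query(terms: list[str]) -> list[int]:
    """Documents containing every term; one intersection per extra term.

    Processing the shortest lists first keeps the intermediate results small.
    """
    lists = sorted((index.get(t, []) for t in terms), key=len)
    result = lists[0]
    for postings in lists[1:]:
        result = intersect(result, postings)
    return result
```

For example, `conjunctive_query(["slow", "search"])` joins two lists, while adding "quality" requires a third join but returns a smaller, more specific set of documents.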


    I just noticed that the "Guidelines to Better Search" web page that I linked to no longer contains the quoted text that it did in 2009. The new text, however, is similar in flavor, and still conveys the same "fast search" rather than "slow search" attitude. You can see how much mindshare "fast search" has in the community; it's nice to see you pushing against it.

    Speaking of slow search, the domain in which I've been working for the past half decade is eDiscovery. In this domain, user sessions on a single information need don't just last minutes or even hours. They last months. One information need, and months of searching, exploration, learning. And I've seen actual scenarios in which users gladly wait overnight, 10 to 12 hours, to get answers to their queries. Not 200 milliseconds, not even 5 minutes. But half a day.

    I know eDiscovery search users are not web search users. I simply wish to note that there are information retrieval domains out there, other than web search, in which the slow search concept is highly relevant. So, keep up the good work!
