Snip!t from collection of Alan Dix

Twitter Engineering: Improving Twitter search with real-time human computation
http://engineering.twitter.com/2.../improving-twitter-search-with-real-time.html

Before we delve into the details, here's an overview of how the system works.

  1. First, we monitor which search queries are currently popular.
    Behind the scenes: we run a Storm (http://engineering.twitter.com/2011/08/storm-is-coming-more-details-and-plans.html) topology that tracks statistics on search queries.
    For example, the query [Big Bird] may suddenly see a spike in searches from the US.

  2. As soon as we discover a new popular search query, we send it to our human evaluators, who are asked a variety of questions about the query.
    Behind the scenes: when the Storm topology detects that a query has reached sufficient popularity, it connects to a Thrift API that dispatches the query to Amazon's Mechanical Turk service, and then polls Mechanical Turk for a response.
    For example: as soon as we notice "Big Bird" spiking, we may ask judges on Mechanical Turk to categorize the query, or provide other information (e.g., whether there are likely to be interesting pictures of the query, or whether the query is about a person or an event) that helps us serve relevant Tweets and ads.

  3. Finally, after a response from an evaluator is received, we push the information to our backend systems, so that the next time a user searches for a query, our machine learning models will make use of the additional information. For example, suppose our evaluators tell us that [Big Bird] is related to politics; the next time someone performs this search, we know to surface ads by @barackobama or @mittromney, not ads about Dora the Explorer.
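Step 1 above amounts to detecting a sudden spike in a query's search rate against its recent baseline. A minimal sketch of that idea in Python (the window size, spike factor, and minimum count are hypothetical thresholds, not Twitter's; the real system runs this inside a Storm topology):

```python
from collections import deque

class SpikeDetector:
    """Flag a query as newly popular when its current search rate
    far exceeds its recent baseline (all thresholds hypothetical)."""

    def __init__(self, window=5, factor=3.0, min_count=10):
        self.window = window        # how many past intervals form the baseline
        self.factor = factor        # spike = count > factor * baseline
        self.min_count = min_count  # ignore very low-volume queries
        self.history = {}           # query -> deque of per-interval counts

    def observe(self, query, count):
        """Record this interval's count; return True if the query spiked."""
        hist = self.history.setdefault(query, deque(maxlen=self.window))
        baseline = sum(hist) / len(hist) if hist else 0.0
        hist.append(count)
        return count >= self.min_count and count > self.factor * max(baseline, 1.0)

detector = SpikeDetector()
for count in [2, 3, 2, 3, 2]:          # steady background rate
    detector.observe("big bird", count)
spiked = detector.observe("big bird", 40)   # sudden surge, e.g. mid-debate
```

In production this kind of statistic would be sharded across Storm bolts keyed by query, but the core comparison against a rolling baseline is the same.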
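Step 2 is a dispatch-then-poll loop: create a task for human judges, then repeatedly check for an answer until one arrives or a deadline passes. The sketch below uses an in-memory stand-in for the crowdsourcing client (the real system talks to Mechanical Turk through a Thrift API; every class and method name here is illustrative, not Twitter's or Amazon's):

```python
import time

class TurkClient:
    """In-memory stand-in for a crowdsourcing client, for illustration only."""

    def __init__(self):
        self._hits = {}

    def create_hit(self, query, questions):
        """Register a task for human judges; return its id."""
        hit_id = "hit-%d" % len(self._hits)
        self._hits[hit_id] = {"query": query, "questions": questions, "answer": None}
        return hit_id

    def submit_answer(self, hit_id, answer):
        """Called when a human judge responds."""
        self._hits[hit_id]["answer"] = answer

    def get_answer(self, hit_id):
        return self._hits[hit_id]["answer"]

def dispatch_and_poll(client, query, timeout_s=2.0, interval_s=0.1):
    """Send a newly popular query to human judges, then poll until an
    evaluation arrives (returns None if we give up)."""
    hit_id = client.create_hit(query, ["What category is this query?",
                                       "Is it about a person or an event?"])
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        answer = client.get_answer(hit_id)
        if answer is not None:
            return answer
        time.sleep(interval_s)
    return None

judge_pool = TurkClient()
hit = judge_pool.create_hit("big bird", ["What category is this query?"])
judge_pool.submit_answer(hit, {"category": "politics"})  # a judge responds
result = judge_pool.get_answer(hit)
```

The polling structure matters more than the stub: because human answers arrive seconds to minutes later, the Storm topology cannot block on the judgment inline and instead checks back asynchronously.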
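Step 3 can be pictured as merging the human judgment into a feature store that the ranking side consults on the next search. A toy sketch of the [Big Bird] example (the dictionary store and the category-matching rule are stand-ins for Twitter's actual backend and machine-learned models):

```python
# Feature store mapping query -> human-judged attributes (toy stand-in).
query_features = {}

def ingest_evaluation(query, attributes):
    """Merge a human judgment into the store consulted at ranking time."""
    query_features.setdefault(query, {}).update(attributes)

def pick_ads(query, ads):
    """Prefer ads whose category matches the query's judged category,
    falling back to all candidates when we know nothing about the query."""
    category = query_features.get(query, {}).get("category")
    matching = [ad for ad in ads if ad["category"] == category]
    return matching or ads

ingest_evaluation("big bird", {"category": "politics"})
candidates = [{"name": "@doratheexplorer", "category": "kids"},
              {"name": "@barackobama", "category": "politics"}]
picked = pick_ads("big bird", candidates)  # the politics ad wins
```

The point of the example: once judges have labeled [Big Bird] as political, the very next search can already use that label, which is the "real-time" part of real-time human computation.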
