MMM, Search!
An opinionated discussion of search metrics, models, and methods. Presented to the Wikimedia Foundation on April 27, 2020.
About the Speaker
Daniel Tunkelang is an independent consultant specializing in search, discovery, machine learning / AI, and data science.
He was a founding employee of Endeca, a search pioneer that Oracle acquired. After 10 years at Endeca, he moved to Google, where he led a local search team. He then served as a director of data science and search at LinkedIn.
After leaving LinkedIn in 2015, he became an independent consultant. His clients have included Apple, eBay, Coupang, Etsy, Flipkart, Gartner, Pinterest, Salesforce, and Yelp; as well as some of the largest traditional retailers.
Daniel completed undergraduate and master's degrees in Computer Science and Math at MIT and a Ph.D. in computer science at CMU. He wrote a book on Faceted Search, published by Morgan & Claypool, and he blogs on Medium about search-related topics, particularly about query understanding. He is also active on Twitter, LinkedIn, and Quora.
3. Search is a process.
• Searchers
• start with information-seeking goals.
• express and elaborate those goals as queries.
• Search Engines
• translate queries into representations of intent.
• retrieve results relevant to that intent and rank them.
Communication isn’t perfect, so the process is iterative.
4. Search is many things.
• Known-Item search vs. exploratory search.
• Seeking specific item vs. knowing when you see it.
• Search is a means to an end, not the end itself.
• Getting information, shopping, communication, etc.
• It takes a lot of hard work to make search feel effortless.
• Indexing, query understanding, matching, ranking.
5. Metrics, Models, Methods
The most important decisions for a search engine are:
• Metrics: what we measure and optimize for.
• Models: how we model the search experience.
• Methods: how we help searchers achieve success.
7. Metrics:
What do we need to know?
• Binary Relevance
• Are searchers finding relevant results?
• Session Success
• How often are search sessions successful?
• Search Efficiency
• How much effort are searchers making?
8. Binary Relevance
Relevance is a measure of
information conveyed by a
document relative to a query.
Relationship between document
and query, though necessary, is not
sufficient to determine relevance.
William Goffman, 1964
10. Example: Email
• Can Google decide which of my emails are important?
• ¯\_(ツ)_/¯
• Can Google decide which of my emails are spam?
• Definitely!
11. Measure Binary Relevance!
• Build a (query, document) binary relevance model.
• (we’ll get back to that in a moment)
• Embrace positional bias: measure at top ranks.
• Can use top k results or weighted sample.
• Stratify for meaningful query and user segments.
• Leverage query classification and user data.
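The measurement described above can be sketched in a few lines. This is a minimal illustration, not a production metric pipeline: the function names, the 0/1 judgment format, and the traffic-weighted sampling are all assumptions for the example.

```python
import random

def precision_at_k(judgments, k=5):
    """Fraction of the top-k results judged relevant, averaged over queries.
    `judgments` maps each query to an ordered list of 0/1 relevance labels,
    which embraces positional bias by only looking at top ranks."""
    scores = []
    for query, labels in judgments.items():
        top = labels[:k]
        scores.append(sum(top) / len(top))
    return sum(scores) / len(scores)

def weighted_query_sample(query_counts, n, seed=0):
    """Sample queries proportionally to traffic, so the metric reflects
    what searchers actually experience rather than a uniform query list."""
    rng = random.Random(seed)
    queries = list(query_counts)
    weights = [query_counts[q] for q in queries]
    return rng.choices(queries, weights=weights, k=n)
```

Stratification then amounts to computing `precision_at_k` separately per query or user segment.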
14. Measure Session Success!
• Measure session conversion, not just query conversion.
• Much better proxy for user’s success!
• Compute metrics based on first query of session.
• Distribution of journeys for common intent.
• Segment sessions into tasks? Maybe, but optional.
• Multi-task sessions uncommon; treat as noise.
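Crediting success to the first query of a session, as suggested above, might look like this sketch (the `(queries, converted)` session representation is an assumption for illustration):

```python
from collections import defaultdict

def session_success_by_first_query(sessions):
    """`sessions` is a list of (queries, converted) pairs: the ordered list
    of queries issued in a session, and whether the session converted.
    Success is attributed to the first query, since later reformulations
    are part of the same journey."""
    totals = defaultdict(lambda: [0, 0])  # first query -> [sessions, successes]
    for queries, converted in sessions:
        if not queries:
            continue
        first = queries[0]
        totals[first][0] += 1
        totals[first][1] += int(converted)
    return {q: succ / total for q, (total, succ) in totals.items()}
```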
16. Searching is not fun.
Having found is fun.
• If search is too hard or takes too long, searchers give up.
• Compare successful and unsuccessful sessions.
• Measure how much time searchers spend in sessions.
• Especially time on search rather than results.
• Measure searcher effort.
• Pagination, reformulation, refinement, etc.
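A simple way to compare effort across successful and unsuccessful sessions is to count effort events per session and average by outcome. The event names here (`paginate`, `reformulate`, `refine`) are illustrative; use whatever your instrumentation emits.

```python
def effort_summary(sessions):
    """Average effort per session, split by outcome. Each session is a dict
    with 'events' (a list of event-type strings) and 'success' (a bool)."""
    buckets = {True: [], False: []}
    for s in sessions:
        effort = sum(1 for e in s["events"]
                     if e in ("paginate", "reformulate", "refine"))
        buckets[s["success"]].append(effort)
    return {("successful" if outcome else "unsuccessful"):
            (sum(efforts) / len(efforts) if efforts else 0.0)
            for outcome, efforts in buckets.items()}
```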
17. Metrics: Summary
• Binary Relevance
• Are searchers finding relevant results?
• Session Success
• How often are search sessions successful?
• Search Efficiency
• How much effort are searchers making?
19. Models:
What do we model and how?
• Query Categorization
• What is the primary domain for a query?
• Query Similarity
• Do two queries express similar / identical intent?
• Binary Relevance
• How to estimate relevance of results to queries?
21. Search starts with query understanding.
Query understanding starts with categorization.
• Map query to a primary content taxonomy.
• Subject, product type, domain, etc.
• Identify high-level intent, independent of content interest.
• Title, category, brand, site help, etc.
• Categories should be coherent, distinctive, and useful.
• Good categorization requires good categories.
22. How to Train Your
Query Categorization Model
• Label your most frequent head queries manually.
• Top 1000 queries are probably worth it.
• For torso queries, infer categories from engagement.
• Looking for overwhelmingly dominant category.
• Now train a model using labeled head and torso queries.
• This training data is biased, but manageably so.
• No need to use fancy deep learning / AI. Try fastText.
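fastText's supervised mode expects one example per line, with the category prefixed by `__label__`. A small sketch of preparing labeled head and torso queries in that format (the helper name is mine; after writing these lines to a file, a classifier can be trained with fastText's `train_supervised`):

```python
def to_fasttext_lines(labeled_queries):
    """Format (query, category) pairs as fastText supervised-training lines:
    '__label__<category> <normalized query>'."""
    lines = []
    for query, category in labeled_queries:
        label = "__label__" + category.replace(" ", "_")
        lines.append(f"{label} {query.lower().strip()}")
    return lines
```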
24. Query ambiguity is rare.
Query similarity is common.
• Some queries do not express a clear intent, but most do.
• Most “ambiguous” queries turn out to be broad.
• Bigger opportunity: multiple queries express same intent.
• Or at least the same distribution of intents.
• Recognizing similar / identical queries is huge opportunity.
• Query rewriting, aggregating signals, etc.
25. How to Model
Query Similarity
• Start with the simple stuff: shallow query canonicalization.
• Character normalization, stemming, word order.
• Look at edit distance, especially for spelling errors.
• Tail queries at edit distance 1 from head queries.
• Compare embeddings of queries and results.
• Especially to keep the other methods honest.
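The "simple stuff" above, shallow canonicalization, can be sketched as follows. This version handles character normalization, punctuation, and word order; a real pipeline might add stemming, which is omitted here for brevity.

```python
import re
import unicodedata

def canonicalize(query):
    """Shallow query canonicalization: strip accents, lowercase, remove
    punctuation, and sort tokens so that word order is ignored. Two queries
    with the same canonical form are treated as expressing the same intent."""
    q = unicodedata.normalize("NFKD", query)
    q = "".join(c for c in q if not unicodedata.combining(c))
    q = re.sub(r"[^\w\s]", " ", q.lower())
    return " ".join(sorted(q.split()))
```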
27. Focus on simplest question.
• Worry whether a result is relevant or non-relevant.
• Relevant vs. more relevant is often subjective.
• Assume that query understanding has done its job first.
• Result relevance depends on query understanding.
• Assume that relevance is objective and universal.
• Personalization: a nice-to-have, not a must-have.
28. How to Train Your
Binary Relevance Model
• Collect human binary relevance judgments. Lots of them.
• Quantity is more important than quality.
• Pay attention to query distribution and stratify sample.
• Collect judgments that teach you something.
• Come to terms with presentation and position bias.
• Users mostly interact with top-ranked results.
29. Models: Summary
• Query Categorization
• Simple model to map query to primary intent.
• Query Similarity
• Recognize queries with same or similar intent.
• Binary Relevance
• Use human judgments to train relevance model.
31. Methods:
What are some useful tricks?
• Optimize for Query Performance
• Help searchers make better queries.
• Map Tail Queries to Head Intents
• Searchers aren’t as unique as you think!
• Learn from Successful Sessions
• Help others discover successful paths.
33. What is query performance?
• Expected searcher success for a query.
• Function of the query, not of any particular result.
• Can use any measure of searcher success.
• But consider focusing on session success.
• Can incorporate sorting, refinement, or other factors.
• But keep it simple. Query is probably enough.
34. Best way to predict query performance?
Historical query performance.
35. Stuck in the tail? No data?
These methods can help.
36. Predict query performance.
Then optimize for it.
• Consider every surface where you suggest queries.
• Autocomplete, guides, related searches, etc.
• Offer suggestions with high predicted performance.
• Or at least nudge users wherever possible.
• Use query rewriting to improve query performance.
• Rewrite to similar, high-performing queries.
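Once every query has a predicted performance score, ranking suggestions by it is straightforward. A minimal sketch, where the performance table, the fallback prior of 0.1, and the function name are all assumptions for the example:

```python
def rank_suggestions(candidates, performance, prior=0.1):
    """Order autocomplete / related-search candidates by historical query
    performance (e.g., session success rate), falling back to a prior
    for candidates with no history."""
    return sorted(candidates,
                  key=lambda q: performance.get(q, prior),
                  reverse=True)
```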
38. Many tail queries
express head intents.
• Misspelled queries are often misspellings of head queries.
• Common misspellings are uncommon.
• Many queries have a dominant singular or plural form.
• Often, though not always, the same intent.
• Also word order or other grammatical transformations.
• Such as removal of low-information / noise words.
39. Rewrite tail queries!
• Prioritize correcting misspellings of head queries.
• Be more aggressive, skip tokenization, etc.
• Look for head queries equivalent to tail queries.
• Stemming, reordering terms, dropping noise words.
• But check to make sure intent is actually preserved!
• Remember earlier discussion of query similarity.
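A sketch of the misspelling-correction step: map a tail query to a head query at edit distance 1, if one exists. As the slide warns, a production system would also verify that intent is actually preserved (e.g., via embedding or result-set similarity), which this sketch skips.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def rewrite_tail_query(tail_query, head_queries):
    """Return a head query at edit distance 1 from the tail query,
    or the original query if no such head query exists."""
    for head in head_queries:
        if edit_distance(tail_query, head) == 1:
            return head
    return tail_query
```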
41. Successful searchers
can help everyone else.
• Some queries lead to great performance for everyone.
• e.g., known-item searches by name or title.
• But for some queries, performance is user-dependent.
• Some users are more sophisticated or persistent.
• Successful users discover successful paths.
• Use trails of successful users to build shortcuts!
42. Optimize complex journeys.
• Detect the searches for which searchers need help.
• Queries for which successful sessions are long.
• Find the actions that successful searchers take.
• Category / facet refinements, reformulations.
• Promote those actions in the search experience.
• Create shortcuts in the navigational landscape.
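Finding the actions that successful searchers take, as described above, can start as a simple frequency count over successful sessions. The `(first_query, actions, success)` session shape and the action strings are illustrative assumptions:

```python
from collections import Counter

def top_actions_for_query(sessions, query, n=3):
    """Return the n actions most often taken in successful sessions that
    began with the given query; candidates for promotion as shortcuts."""
    counts = Counter()
    for first_query, actions, success in sessions:
        if success and first_query == query:
            counts.update(actions)
    return [action for action, _ in counts.most_common(n)]
```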
43. Methods: Summary
• Optimize for Query Performance
• Suggest better queries and rewrite others.
• Map Tail Queries to Head Intents
• Rewrite tail queries as similar head queries.
• Learn from Successful Sessions
• Create shortcuts based on successful paths.
44. Putting It All Together
• Metrics, models, and methods — they all matter.
• Query understanding first, then result relevance.
• Binary result relevance first, then result ranking.
• Session performance, not just query performance.
• Get as much leverage as possible from head queries.
45. Thank You!
• More Resources
• Query Understanding
https://queryunderstanding.com/
• My Medium (not just about search)
https://medium.com/@dtunkelang
• Contact me directly!
dtunkelang@gmail.com