
24 nips-2009-Adapting to the Shifting Intent of Search Queries



Author: Umar Syed, Aleksandrs Slivkins, Nina Mishra

Abstract: Search engines today present results that are often oblivious to recent shifts in intent. For example, the meaning of the query ‘independence day’ shifts in early July to a US holiday and to a movie around the time of the box office release. While no studies exactly quantify the magnitude of intent-shifting traffic, studies suggest that news events, seasonal topics, pop culture, etc. account for roughly half of all search queries. This paper shows that the signals a search engine receives can be used both to determine that a shift in intent has happened and to find a result that is now more relevant. We present a meta-algorithm that marries a classifier with a bandit algorithm to achieve regret that depends logarithmically on the number of query impressions, under certain assumptions. We provide strong evidence that this regret is close to the best achievable. Finally, via a series of experiments, we demonstrate that our algorithm outperforms prior approaches, particularly as the amount of intent-shifting traffic increases.
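
To make the meta-algorithm description concrete, here is a minimal sketch in Python: a UCB1-style bandit over candidate results that is reset whenever an external classifier signals a likely intent shift. This is an illustrative pairing under assumptions, not the paper's exact algorithm; `shift_suspected` and `click_feedback` are hypothetical callbacks standing in for the classifier's output and the click log.

```python
import math

class UCB1:
    """Standard UCB1 index policy over a fixed set of arms (candidate results)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms      # plays per arm
        self.rewards = [0.0] * n_arms   # cumulative reward (clicks) per arm

    def select(self):
        # Play every arm once before relying on confidence indices.
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm
        total = sum(self.counts)
        return max(
            range(len(self.counts)),
            key=lambda a: self.rewards[a] / self.counts[a]
            + math.sqrt(2.0 * math.log(total) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.rewards[arm] += reward


def serve_query(shift_suspected, click_feedback, n_arms, horizon):
    """Meta-algorithm sketch: run a bandit over results for one query,
    restarting it whenever the classifier flags a likely shift in intent."""
    bandit = UCB1(n_arms)
    for t in range(horizon):
        if shift_suspected(t):           # classifier's shift signal at impression t (assumed interface)
            bandit = UCB1(n_arms)        # discard stale click statistics
        arm = bandit.select()            # result shown for this impression
        bandit.update(arm, click_feedback(t, arm))  # reward: 1 if clicked, else 0
```

The restart is the simplest way a shift signal can be folded into a stochastic bandit; it keeps the per-phase regret logarithmic in the number of impressions between detected shifts, which matches the flavor of the guarantee stated above.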


reference text

[1] Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, Nitin Motgi, Seung-Taek Park, Raghu Ramakrishnan, Scott Roy, and Joe Zachariah. Online models for content optimization. In 22nd Annual Conference on Neural Information Processing Systems (NIPS), 2008.

[2] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.

[3] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.

[4] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1–7):107–117, 1998.

[5] Christopher J. C. Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gregory N. Hullender. Learning to rank using gradient descent. In 22nd Intl. Conf. on Machine Learning (ICML), 2005.

[6] Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In 24th Intl. Conf. on Machine Learning (ICML), 2007.

[7] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.

[8] William W. Cohen, Robert E. Schapire, and Yoram Singer. Learning to order things. J. of Artificial Intelligence Research, 10:243–270, 1999.

[9] Fernando Diaz. Integration of news content into web results. In 2nd Intl. Conf. on Web Search and Data Mining, pages 182–191, 2009.

[10] D. Fallows. Search engine users. Pew Internet and American Life Project, 2005.

[11] Yoav Freund, Raj Iyer, Robert E. Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. J. of Machine Learning Research, 4:933–969, 2003.

[12] Elad Hazan and Nimrod Megiddo. Online Learning with Prior Knowledge. In 20th Conference on Learning Theory (COLT), pages 499–513, 2007.

[13] Thorsten Joachims. Optimizing search engines using clickthrough data. In 8th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining (KDD), 2002.

[14] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Efficient bandit algorithms for online multiclass prediction. In 25th Intl. Conf. on Machine Learning (ICML), 2008.

[15] Jon M. Kleinberg. Bursty and hierarchical structure in streams. In 8th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining (KDD), 2002.

[16] John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In 21st Annual Conference on Neural Information Processing Systems (NIPS), 2007.

[17] Sandeep Pandey, Deepak Agarwal, Deepayan Chakrabarti, and Vanja Josifovski. Bandits for Taxonomies: A Model-based Approach. In SIAM Intl. Conf. on Data Mining (SDM), 2007.

[18] Sandeep Pandey, Deepayan Chakrabarti, and Deepak Agarwal. Multi-armed Bandit Problems with Dependent Arms. In 24th Intl. Conf. on Machine Learning (ICML), 2007.

[19] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed bandits. In 25th Intl. Conf. on Machine Learning (ICML), 2008.

[20] Umar Syed, Aleksandrs Slivkins, and Nina Mishra. Adapting to the shifting intent of search queries. Technical report. Available from arXiv.

[21] Chih-Chun Wang, Sanjeev R. Kulkarni, and H. Vincent Poor. Bandit problems with side observations. IEEE Trans. on Automatic Control, 50(3):338–355, 2005.

[22] Jia Yuan Yu and Shie Mannor. Piecewise-stationary bandit problems with side observations. In 26th Intl. Conf. on Machine Learning (ICML), 2009.