NIPS 2013, Paper 311
Authors: Mehrdad Mahdavi, Tianbao Yang, Rong Jin
Abstract: In this paper, we are interested in developing efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives and stochasticity in the first-order information. We cast the stochastic multiple-objective optimization problem as a constrained optimization problem by choosing one function as the objective and bounding the remaining objectives by appropriate thresholds. We first examine a two-stage exploration-exploitation algorithm that approximates the stochastic objectives by sampling and then solves the resulting constrained stochastic optimization problem by the projected gradient method. This method attains a suboptimal convergence rate even under strong assumptions on the objectives. Our second approach is an efficient primal-dual stochastic algorithm. It leverages the theory of Lagrangian methods in constrained optimization and attains the optimal O(1/√T) convergence rate in high probability for general Lipschitz-continuous objectives.
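To make the primal-dual idea in the abstract concrete, below is a minimal Python sketch of this kind of scheme: a stochastic gradient descent step on the Lagrangian in the primal variable, alternated with a projected (nonnegative) ascent step in the dual multipliers. It assumes stochastic first-order oracles for the objective and the constraints and a Euclidean-ball domain; the function names, signatures, and step-size handling are illustrative assumptions for this example, not the paper's exact algorithm.

import numpy as np

def primal_dual_sgd(grad_f0, constraint_vals, constraint_grads, gammas,
                    x0, radius, eta, T, rng=None):
    """Sketch of a primal-dual stochastic gradient method for
        min f0(x)  subject to  f_i(x) <= gamma_i,  ||x|| <= radius.
    The oracle names and signatures are illustrative, not the paper's notation."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(gammas))          # one dual multiplier per constraint
    avg = np.zeros_like(x)
    for _ in range(T):
        g0 = grad_f0(x, rng)                           # stochastic gradient of f0
        G = np.atleast_2d(constraint_grads(x, rng))    # shape (m, d)
        vals = np.asarray(constraint_vals(x, rng))     # noisy estimates of f_i(x)
        # Primal descent on the Lagrangian f0(x) + sum_i lam_i (f_i(x) - gamma_i)
        x = x - eta * (g0 + G.T @ lam)
        nrm = np.linalg.norm(x)          # project back onto the Euclidean ball
        if nrm > radius:
            x *= radius / nrm
        # Dual ascent on the same Lagrangian, clipped to keep multipliers >= 0
        lam = np.maximum(0.0, lam + eta * (vals - np.asarray(gammas)))
        avg += x
    return avg / T                       # averaged primal iterate

# Toy usage: minimize E||x - a||^2 subject to <b, x> <= 0, with additive noise.
a, b = np.array([1.0, 2.0]), np.array([1.0, 1.0])
x_hat = primal_dual_sgd(
    grad_f0=lambda x, rng: 2 * (x - a) + 0.1 * rng.standard_normal(2),
    constraint_vals=lambda x, rng: np.array([b @ x + 0.1 * rng.standard_normal()]),
    constraint_grads=lambda x, rng: np.array([b]),
    gammas=[0.0], x0=np.zeros(2), radius=5.0, eta=0.05, T=20000)

With a step size on the order of 1/√T and the averaged iterate returned here, standard analyses of such primal-dual schemes (cf. [17]) yield an O(1/√T) rate for Lipschitz-continuous objectives, which is the rate the abstract refers to.

References: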
[1] F. B. Abdelaziz. Solution approaches for the multiobjective stochastic programming. European Journal of Operational Research, 216(1):1–16, 2012.
[2] F. B. Abdelaziz, B. Aouni, and R. E. Fayedh. Multi-objective stochastic programming for portfolio selection. European Journal of Operational Research, 177(3):1811–1823, 2007.
[3] A. Agarwal, P. L. Bartlett, P. D. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, 2012.
[4] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In NIPS, pages 451–459, 2011.
[5] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust optimization. Princeton University Press, 2009.
[6] S. Boucheron, G. Lugosi, and O. Bousquet. Concentration inequalities. In Advanced Lectures on Machine Learning, pages 208–240, 2003.
[7] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[8] R. Caballero, E. Cerdá, M. del Mar Muñoz, and L. Rey. Stochastic approach versus multiobjective approach for obtaining efficient solutions in stochastic multiobjective programming problems. European Journal of Operational Research, 158(3):633–648, 2004.
[9] M. Ehrgott. Multicriteria optimization. Springer, 2005.
[10] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. Journal of Machine Learning Research - Proceedings Track, 19:421–436, 2011.
[11] K.-J. Hsiao, K. S. Xu, J. Calder, and A. O. Hero III. Multi-criteria anomaly detection using Pareto depth analysis. In NIPS, pages 854–862, 2012.
[12] Y. Jin and B. Sendhoff. Pareto-based multiobjective machine learning: An overview and case studies. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 38(3):397–415, 2008.
[13] M. Mahdavi, R. Jin, and T. Yang. Trading regret for efficiency: online convex optimization with long term constraints. Journal of Machine Learning Research, 13:2465–2490, 2012.
[14] S. Mannor, J. N. Tsitsiklis, and J. Y. Yu. Online learning with sample path constraints. Journal of Machine Learning Research, 10:569–590, 2009.
[15] H. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
[16] A. Nemirovski. Efficient methods in convex programming. Lecture notes, available at http://www2.isye.gatech.edu/~nemirovs, 1994.
[17] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19:1574–1609, 2009.
[18] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
[19] P. Rigollet and X. Tong. Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12:2831–2855, 2011.
[20] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
[21] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009.
[22] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: primal estimated sub-gradient solver for SVM. In ICML, pages 807–814, 2007.
[23] K. Sridharan. Learning from an optimization viewpoint. PhD thesis, 2012.
[24] K. M. Svore, M. N. Volkovs, and C. J. Burges. Learning to rank with multiple objective functions. In WWW, pages 367–376. ACM, 2011.
[25] H. Xu and F. Meng. Convergence analysis of sample average approximation methods for a class of stochastic mathematical programs with equality constraints. Mathematics of Operations Research, 32(3):648–668, 2007.
[26] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936, 2003.