jmlr2011-22: Differentially Private Empirical Risk Minimization (JMLR, 2011). Reference knowledge graph by maker-knowledge-mining.
Source: pdf
Author: Kamalika Chaudhuri, Claire Monteleoni, Anand D. Sarwate
Abstract: Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006) to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
Keywords: privacy, classification, optimization, empirical risk minimization, support vector machines, logistic regression
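The objective-perturbation idea described in the abstract, adding a random linear term to the regularized ERM objective before optimizing, can be sketched as follows for L2-regularized logistic regression. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the function name, the plain gradient-descent solver, the step size, and the data-scaling assumption are ours, and the full method additionally adjusts the privacy budget for the curvature of the loss.

```python
import numpy as np

def objective_perturbation_logreg(X, y, lam, eps, seed=None):
    """Train L2-regularized logistic regression with a randomly
    perturbed objective (illustrative sketch). Assumes each row of X
    has L2 norm <= 1 and labels y are in {-1, +1}. The noise density
    is proportional to exp(-(eps / 2) * ||b||)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Sample b: uniform direction, Gamma(d, 2/eps)-distributed norm.
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    b = direction * rng.gamma(shape=d, scale=2.0 / eps)

    def grad(w):
        # Gradient of (1/n) sum_i log(1 + exp(-y_i x_i.w))
        #   + (lam/2)||w||^2 + (1/n) b.w
        margins = y * (X @ w)
        s = -y / (1.0 + np.exp(margins))
        return (X.T @ s) / n + lam * w + b / n

    # Plain gradient descent; the iteration count and step size are
    # ad hoc but adequate for this smooth, strongly convex objective.
    w = np.zeros(d)
    for _ in range(2000):
        w -= 0.5 * grad(w)
    return w
```

With a large eps the sampled noise is negligible and the output approaches the non-private regularized solution; as eps shrinks, the linear term b.w increasingly distorts the learned classifier, which is the privacy/performance tradeoff the abstract refers to.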
References:
R. Agrawal and R. Srikant. Privacy-preserving data mining. SIGMOD Record, 29(2):439–450, 2000.
A. Asuncion and D. J. Newman. UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences, 2007. URL http://www.ics.uci.edu/~mlearn/MLRepository.html.
L. Backstrom, C. Dwork, and J. Kleinberg. Wherefore art thou R3579X? Anonymized social networks, hidden patterns, and structural steganography. In Proceedings of the 16th International World Wide Web Conference, 2007.
K. Ball. An elementary introduction to modern convex geometry. In S. Levy, editor, Flavors of Geometry, volume 31 of Mathematical Sciences Research Institute Publications, pages 1–58. Cambridge University Press, 1997.
B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: a holistic solution to contingency table release. In Proceedings of the 26th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS), pages 273–282, 2007.
A. Beimel, S. P. Kasiviswanathan, and K. Nissim. Bounds on the sample complexity for private learning and private data release. In Proceedings of the 7th IACR Theory of Cryptography Conference (TCC), pages 437–454, 2010.
P. Billingsley. Probability and Measure. Wiley-Interscience, New York, 3rd edition, 1995. ISBN 0471007102.
A. Blum, K. Ligett, and A. Roth. A learning theory approach to non-interactive database privacy. In R. E. Ladner and C. Dwork, editors, Proceedings of the 40th ACM Symposium on Theory of Computing (STOC), pages 609–618. ACM, 2008. ISBN 978-1-60558-047-0.
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
O. Chapelle. Training a support vector machine in the primal. Neural Computation, 19(5):1155–1178, May 2007.
K. Chaudhuri and N. Mishra. When random sampling preserves privacy. In C. Dwork, editor, CRYPTO, volume 4117 of Lecture Notes in Computer Science, pages 198–213. Springer, 2006. ISBN 3-540-37432-9.
K. Chaudhuri and C. Monteleoni. Privacy-preserving logistic regression. In Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS), 2008.
C. Dwork. Differential privacy. In M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, editors, ICALP (2), volume 4052 of Lecture Notes in Computer Science, pages 1–12. Springer, 2006. ISBN 3-540-35907-9.
C. Dwork and J. Lei. Differential privacy and robust statistics. In Proceedings of the 41st ACM Symposium on Theory of Computing (STOC), 2009.
C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor. Our data, ourselves: privacy via distributed noise generation. In S. Vaudenay, editor, EUROCRYPT, volume 4004 of Lecture Notes in Computer Science, pages 486–503. Springer, 2006a.
C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd IACR Theory of Cryptography Conference (TCC), pages 265–284, 2006b.
A. Evfimievski, J. Gehrke, and R. Srikant. Limiting privacy breaches in privacy preserving data mining. In Proceedings of the 22nd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS), pages 211–222, 2003.
S. R. Ganta, S. P. Kasiviswanathan, and A. Smith. Composition attacks and auxiliary information in data privacy. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 265–273, 2008.
A. Gupta, K. Ligett, F. McSherry, A. Roth, and K. Talwar. Differentially private approximation algorithms. In Proceedings of the 2010 ACM-SIAM Symposium on Discrete Algorithms (SODA), 2010.
T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5:1391–1415, 2004.
S. Hettich and S. D. Bay. The UCI KDD Archive. University of California, Irvine, Department of Information and Computer Science, 1999. URL http://kdd.ics.uci.edu.
N. Homer, S. Szelinger, M. Redman, D. Duggan, W. Tembe, J. Muehling, J. V. Pearson, D. A. Stephan, S. F. Nelson, and D. W. Craig. Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS Genetics, 4(8):e1000167, August 2008.
R. Jones, R. Kumar, B. Pang, and A. Tomkins. "I know what you did last summer": query logs and user privacy. In CIKM '07: Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, pages 909–914, New York, NY, USA, 2007. ACM. ISBN 978-1-59593-803-9.
S. P. Kasiviswanathan, M. Rudelson, A. Smith, and J. Ullman. The price of privately releasing contingency tables and the spectra of random matrices with correlated rows. In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 2010.
S. A. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith. What can we learn privately? In Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), 2008.
G. S. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Annals of Mathematical Statistics, 41:495–502, 1970.
S. Laur, H. Lipmaa, and T. Mielikäinen. Cryptographically private support vector machines. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 618–624, 2006.
A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam. l-diversity: privacy beyond k-anonymity. In Proceedings of the 22nd International Conference on Data Engineering (ICDE), 2006.
A. Machanavajjhala, D. Kifer, J. M. Abowd, J. Gehrke, and L. Vilhuber. Privacy: theory meets practice on the map. In Proceedings of the 24th International Conference on Data Engineering (ICDE), pages 277–286, 2008.
O. L. Mangasarian, E. W. Wild, and G. Fung. Privacy-preserving classification of vertically partitioned data via random kernels. ACM Transactions on Knowledge Discovery from Data, 2(3), 2008.
F. McSherry and K. Talwar. Mechanism design via differential privacy. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 94–103, 2007.
A. Narayanan and V. Shmatikov. Robust de-anonymization of large sparse datasets (how to break anonymity of the Netflix Prize dataset). In Proceedings of the 29th IEEE Symposium on Security and Privacy, pages 111–125, 2008.
K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. In D. S. Johnson and U. Feige, editors, Proceedings of the 39th ACM Symposium on the Theory of Computing (STOC), pages 75–84. ACM, 2007. ISBN 978-1-59593-631-8.
N. Okazaki. liblbfgs: a library of limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), 2009. URL http://www.chokkan.org/software/liblbfgs/index.html.
A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Proceedings of the 21st Annual Conference on Neural Information Processing Systems (NIPS), 2007.
A. Rahimi and B. Recht. Uniform approximation of functions with random bases. In Proceedings of the 46th Allerton Conference on Communication, Control, and Computing, 2008a.
A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: replacing minimization with randomization in learning. In Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS), 2008b.
R. T. Rockafellar and R. J-B. Wets. Variational Analysis. Springer, Berlin, 1998.
B. I. P. Rubinstein, P. L. Bartlett, L. Huang, and N. Taft. Learning in a large function space: privacy-preserving mechanisms for SVM learning. Preprint, 2009. URL http://arxiv.org/abs/0911.5708.
S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University of Jerusalem, July 2007.
S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
K. Sridharan, N. Srebro, and S. Shalev-Shwartz. Fast rates for regularized objectives. In Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS), 2008.
L. Sweeney. Weaving technology and policy together to maintain confidentiality. Journal of Law, Medicine and Ethics, 25:98–110, 1997.
L. Sweeney. k-anonymity: a model for protecting privacy. International Journal on Uncertainty, Fuzziness and Knowledge-Based Systems, 2002.
V. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
R. Wang, Y. F. Li, X. Wang, H. Tang, and X.-Y. Zhou. Learning your identity and disease from research papers: information leaks in genome-wide association study. In Proceedings of the ACM Conference on Computer and Communications Security, pages 534–544, 2009.
L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375–389, 2010.
A. C.-C. Yao. Protocols for secure computations (extended abstract). In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (FOCS), pages 160–164, 1982.
J. Z. Zhan and S. Matwin. Privacy-preserving support vector machine classification. International Journal of Intelligent Information and Database Systems, 1(3/4):356–385, 2007.
S. Zhou, K. Ligett, and L. Wasserman. Differential privacy with compression. In Proceedings of the 2009 International Symposium on Information Theory, Seoul, South Korea, 2009.