nips nips2013 nips2013-25 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Krzysztof M. Choromanski, Tony Jebara, Kui Tang
Abstract: The adaptive anonymity problem is formalized where each individual shares their data along with an integer value to indicate their personal level of desired privacy. This problem leads to a generalization of k-anonymity to the b-matching setting. Novel algorithms and theory are provided to implement this type of anonymity. The relaxation achieves better utility, admits theoretical privacy guarantees that are as strong, and, most importantly, accommodates a variable level of anonymity for each individual. Empirical results confirm improved utility on benchmark and social data-sets.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract The adaptive anonymity problem is formalized where each individual shares their data along with an integer value to indicate their personal level of desired privacy. [sent-4, score-0.829]
2 The relaxation achieves better utility, admits theoretical privacy guarantees that are as strong, and, most importantly, accommodates a variable level of anonymity for each individual. [sent-7, score-1.026]
3 If the data contains sensitive information, it is necessary to protect it with privacy guarantees while maintaining some notion of data utility [18, 2, 24]. [sent-10, score-0.404]
4 However, the acceptable anonymity and comfort-level of each individual in a population can vary widely. [sent-14, score-0.724]
5 This article explores the adaptive anonymity setting and shows how to generalize the k-anonymity framework to handle it. [sent-15, score-0.816]
6 § 2 formalizes the adaptive anonymity problem and shows how k-anonymity does not handle it. [sent-22, score-0.777]
7 This leads to a relaxation of k-anonymity into symmetric and asymmetric bipartite regular compatibility graphs. [sent-23, score-0.438]
8 § 3 provides algorithms for maximizing utility under these relaxed privacy criteria. [sent-24, score-0.381]
9 § 4 provides theorems to ensure the privacy of these relaxed criteria for uniform anonymity as well as for adaptive anonymity. [sent-25, score-1.013]
10 2 Adaptive anonymity and necessary relaxations to k-anonymity The adaptive anonymity problem considers a data-set X ∈ Z^{n×d} consisting of n ∈ N observations {x1, . . . , xn}. [sent-28, score-1.481]
11 Furthermore, each user i provides an adaptive anonymity parameter δi ∈ N they desire to keep when the database is released. [sent-33, score-0.886]
12 Given such a data-set and anonymity parameters, we wish to output an obfuscated data-set denoted by Y ∈ (Z ∪ {∗})^{n×d} which consists of vectors {y1, . . . , yn}. [sent-34, score-0.803]
13 The most pervasive method for anonymity in the released data is the k-anonymity method [19, 1]. [sent-45, score-0.784]
14 We will show that the idea of k − 1 copies can be understood as forming a compatibility graph between the original database and the released database which is composed of several fully-connected k-cliques. [sent-49, score-0.412]
15 However, rather than guaranteeing copies or cliques, the anonymity problem can be relaxed into a k-regular compatibility graph to achieve nearly identical resilience to attack. [sent-50, score-0.952]
16 More interestingly, this relaxation will naturally allow users to select different δi anonymity values or degrees in the compatibility graph and allow them to achieve their desired personal protection level. [sent-51, score-1.119]
17 Why can’t k-anonymity handle heterogeneous anonymity levels δi ? [sent-52, score-0.763]
18 Consider the case where the population contains many liberal users with very low anonymity levels yet one single paranoid user (user i) wants to have a maximal anonymity with δi = n. [sent-53, score-1.653]
19 We aim to release an obfuscated database Y and its keys with the possibility that an adversary may have access to all or a subset of X and the identities. [sent-70, score-0.46]
20 Thus, the attack we seek to protect against is the use of the data to match usernames to keys (rather than attacks in which additional non-sensitive attributes about a user are discovered). [sent-73, score-0.519]
21 In the extreme case, it is easy to see that replacing all of Y with ∗ symbols will result in an attack success probability of 1/n if the adversary attempts a single random attack-pair (username and key). [sent-75, score-0.391]
22 Meanwhile, releasing a database Y = X with keys could allow the adversary to succeed with an initial attack with probability 1. [sent-76, score-0.519]
23 We will instead use a compatibility graph G to more precisely characterize how elements are indistinguishable in the data-sets and which entries of Y are compatible with entries in the original data-set X. [sent-80, score-0.318]
24 The graph places edges between entries of X which are compatible with entries of Y. [sent-81, score-0.317]
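The compatibility relation described above is easy to make concrete. The following sketch (hypothetical helpers, not the paper's code; "*" stands for the suppression symbol) marks a released row y compatible with an original row x when every unsuppressed coordinate agrees, and collects the resulting bipartite edge set:

```python
# Hypothetical helpers (not from the paper): "*" is the suppression symbol.
def is_compatible(x, y, star="*"):
    """y is compatible with x when every non-star coordinate of y equals x's."""
    return all(yk == star or yk == xk for xk, yk in zip(x, y))

def compatibility_graph(X, Y, star="*"):
    """Edge set of the bipartite graph G(A, B): (i, j) is an edge when the
    released row Y[j] is compatible with the original row X[i]."""
    return {(i, j)
            for i, x in enumerate(X)
            for j, y in enumerate(Y)
            if is_compatible(x, y, star)}

X = [(1, 0, 0, 1), (1, 0, 0, 0)]
Y = [("*", 0, 0, "*"), ("*", 0, 0, "*")]
edges = compatibility_graph(X, Y)
# Both released rows are compatible with both originals, so there are 4 edges.
```

Releasing Y = X would make this graph a perfect matching (attack success probability 1), while starring everything makes it complete bipartite (success probability 1/n), matching the two extremes discussed earlier.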
25 Clearly, G is an undirected bipartite graph containing two equal-sized partitions (or color-classes) of nodes A and B, each of cardinality n, where A = {a1, . . . , an}. [sent-82, score-0.309]
26 1 Let G(A, B) be a bipartite graph with color classes A and B, where A = {a1, . . . , an}. [sent-94, score-0.332]
27 We call a k-regular bipartite graph G(A, B) a clique-bipartite graph if it is a union of pairwise disjoint and nonadjacent complete k-regular bipartite graphs. [sent-101, score-0.618]
28 2 Let G(A, B) be a bipartite graph with color classes A and B, where A = {a1, . . . , an}. [sent-106, score-0.332]
29 This article introduces graph families G_b^{n,δ} and G_s^{n,δ} to enforce privacy since these are relaxations of the family G_k^{n,k} as previously explored in k-anonymity research. [sent-117, score-0.451]
30 Furthermore, they will allow us to permit adaptive anonymity levels across the users in the database. [sent-119, score-0.825]
31 Figure 1: Traditional k-anonymity (in Gk) for n = 6, d = 4, δ = 2 achieves #(∗) = 10. (Table residue from the figure, a six-user username/key table with binary attributes, omitted.) [sent-122, score-0.385]
32 Left to right: usernames with data (x, X), compatibility graph (G) and anonymized data with keys (Y, y). [sent-123, score-0.408]
33 Figure 2: The b-matching anonymity (in Gb) for n = 6, d = 4, δ = 2 achieves #(∗) = 8. (Table residue from the figure omitted.) [sent-124, score-1.109]
34 Left to right: usernames with data (x, X), compatibility graph (G) and anonymized data with keys (Y, y). [sent-125, score-0.408]
35 3 we find that the minimum number of stars to achieve this type of anonymity is #(∗) = 10. [sent-133, score-0.832]
36 It was possible to find a smaller number of stars since δ-regular bipartite graphs are a relaxation of k-clique graphs as shown in figure 1. [sent-137, score-0.372]
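One way to see the relaxation concretely: a δ-regular bipartite compatibility graph can be assembled from δ edge-disjoint cyclic-shift matchings, and released row b_j must then star every column on which its neighbors in X disagree. The sketch below (an illustrative construction of mine, not the paper's algorithm) counts the stars forced by such a graph:

```python
def cyclic_regular_graph(n, delta):
    """A delta-regular bipartite graph as a union of delta edge-disjoint
    cyclic-shift matchings: edge (a_i, b_{(i+s) mod n}) for s = 0..delta-1."""
    return {(i, (i + s) % n) for s in range(delta) for i in range(n)}

def stars_needed(X, edges, d):
    """Released row b_j must star column k unless all of b_j's neighbors in X
    agree on column k; returns the total star count #(*)."""
    n = len(X)
    total = 0
    for j in range(n):
        neighbors = [X[i] for (i, jj) in edges if jj == j]
        total += sum(len({x[k] for x in neighbors}) > 1 for k in range(d))
    return total

X = [(1, 0, 0, 1), (1, 0, 0, 0), (0, 0, 1, 1), (0, 0, 1, 1)]
count = stars_needed(X, cyclic_regular_graph(4, 2), d=4)  # count == 6
```

A better choice of the δ matchings lowers the count; that choice is exactly what the optimization in section 3 searches for.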
37 These algorithms operate by finding a graph in Gb or Gs and achieve protection similar to k-anonymity (which finds a graph in the most restrictive family Gk and therefore requires more stars). [sent-149, score-0.314]
38 We provide an algorithm for the b-matching anonymity problem with approximation ratio √δ and runtime O(δm√n), where n is the number of users in the data and δ is the largest anonymity level in {δ1, . . . , δn}. [sent-153, score-1.518]
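For intuition only, the objective can be solved exactly by brute force on tiny instances, using the fact that a δ-regular bipartite graph decomposes into δ edge-disjoint perfect matchings (König's theorem). This exhaustive reference solver is my sketch, not the paper's √δ-approximation algorithm, and it is exponential in n:

```python
from itertools import combinations, permutations

def min_stars_bmatching(X, delta):
    """Exhaustive minimum-star b-matching for tiny n: enumerate sets of delta
    pairwise edge-disjoint permutations (perfect matchings) and count, for
    each released row b_j, the columns on which its neighbors in X disagree."""
    n, d = len(X), len(X[0])
    best = n * d
    for combo in combinations(permutations(range(n)), delta):
        # edge-disjoint: no two matchings may share an edge (i, p[i])
        if any(p[i] == q[i] for p, q in combinations(combo, 2) for i in range(n)):
            continue
        stars = 0
        for j in range(n):
            neighbors = [X[p.index(j)] for p in combo]  # rows a_i with p[i] == j
            stars += sum(len({x[k] for x in neighbors}) > 1 for k in range(d))
        best = min(best, stars)
    return best

X = [(1, 0, 0, 1), (1, 0, 0, 0), (0, 0, 1, 1), (0, 0, 1, 1)]
best = min_stars_bmatching(X, 2)  # best == 2: pair rows {0, 1} and {2, 3}
```

The practical algorithms in the paper reach near-optimal solutions via min-cost matching machinery instead of this enumeration.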
39 One algorithm solves for minimum weight bipartite b-matchings which is easy to implement using linear programming, max-flow methods or belief propagation in the bipartite case [9, 11]. [sent-157, score-0.382]
40 Thus, the set of possible output solutions is strictly smaller (the bipartite formulation relaxes the symmetric one). [sent-175, score-0.297]
41 Both algorithms manipulate a bipartite regular graph G(A, B) containing the true matching {(a1, b1), . . . , (an, bn)}. [sent-184, score-0.445]
42 We now discuss how an adversary can attack privacy by recovering this matching or parts of it. [sent-196, score-0.761]
43 4 Privacy guarantees We now characterize the anonymity provided by a compatibility graph G ∈ Gb (or G ∈ Gs ) under several attack models. [sent-197, score-1.057]
44 In other words, the adversary wishes to find the random matching M used in the algorithms (or parts of M ) to connect the entries of X to the entries of Ypublic (assuming the adversary has stolen X and Ypublic or portions of them). [sent-199, score-0.672]
45 More precisely, we have a bipartite graph G(A, B) with color classes A, B, each of size n. [sent-200, score-0.332]
46 The latter is especially important if we are interested in guaranteeing different levels of privacy for different users and allowing δ to vary with the user’s index i. [sent-204, score-0.324]
47 Sometimes the adversary has additional information and knows from the very beginning some complete records that belong to particular people. [sent-205, score-0.328]
48 In graph-theoretic terms, the adversary thus knows parts of the hidden matching M in advance. [sent-206, score-0.436]
49 Alternatively, the adversary may have come across such additional information through a sustained attack in which previous attempts revealed the presence or absence of an edge. [sent-207, score-0.474]
50 1 One-Time Attack Guarantees Assume first that the adversary has no extra information about the matching and performs a one-time attack. [sent-214, score-0.364]
51 1 If G(A, B) is an arbitrary δ-regular graph and the adversary does not know any edges of the matching he is looking for, then every person is δ-anonymous. [sent-219, score-0.627]
52 Thus, for a single attack, b-matching anonymity (symmetric or asymmetric) is equivalent to k-anonymity when b = k. [sent-227, score-0.724]
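The one-time guarantee is easy to check empirically. Under illustrative assumptions of mine (the graph is a union of δ cyclic shifts, the hidden matching is the identity, and the adversary guesses a uniformly random neighbor of the target), the single-attack success rate converges to 1/δ:

```python
import random

def one_time_attack_rate(n, delta, trials=20000, seed=0):
    """Monte Carlo: the adversary picks a target b_j and guesses a uniformly
    random neighbor a_{(j - s) mod n}, s in {0, ..., delta-1}; with the
    identity as the hidden matching, the guess succeeds iff s == 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        j = rng.randrange(n)
        s = rng.randrange(delta)
        hits += ((j - s) % n == j)
    return hits / trials

rate = one_time_attack_rate(n=100, delta=5)  # close to 1/5 = 0.2
```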
53 1 Assume the bipartite graph G(A, B) is either δ-regular, symmetric δ-regular or clique-bipartite and δ-regular. [sent-229, score-0.415]
54 Here, the adversary may know c ∈ N edges in M a priori by whatever means (previous attacks or through side information). [sent-233, score-0.393]
55 In the clique-bipartite graph, even if the adversary knows some edges of the matching (but not too many), there is still hope of good anonymity for all people. [sent-235, score-1.233]
56 The anonymity of every person decreases from δ to at least (δ − c). [sent-236, score-0.774]
57 So, for example, if the adversary knows in advance δ/2 edges of the matching, then we get the same type of anonymity for every person as in the model with half the degree in which the adversary has no extra knowledge. [sent-237, score-1.617]
58 3 If G(A, B) is a clique-bipartite δ-regular graph and the adversary knows in advance c edges of the matching, then every person is (δ − c)-anonymous. [sent-239, score-0.756]
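This (δ − c)-anonymity bound can also be checked by simulation inside a single clique of the clique-bipartite graph. In the sketch below (illustrative, with hypothetical parameter names), the adversary learns c random edges of a hidden random matching and then guesses among the remaining candidates; the success rate approaches 1/(δ − c):

```python
import random

def sustained_attack_rate(delta, c, trials=20000, seed=1):
    """One complete delta-regular clique: b-vertices 0..delta-1 are matched to
    a-vertices by a hidden random permutation; the adversary learns c edges,
    picks an unknown target, and guesses uniformly among the delta - c
    remaining a-candidates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        matching = list(range(delta))
        rng.shuffle(matching)            # hidden matching b_j -> a_{matching[j]}
        revealed = set(rng.sample(range(delta), c))
        target = rng.choice([j for j in range(delta) if j not in revealed])
        taken = {matching[j] for j in revealed}
        candidates = [a for a in range(delta) if a not in taken]
        hits += (rng.choice(candidates) == matching[target])
    return hits / trials

rate = sustained_attack_rate(delta=8, c=3)  # close to 1/(8 - 3) = 0.2
```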
59 Then there are at least (δ − c) edges adjacent to v such that, for each of these edges e, there exists some perfect matching M_e in G(A, B) that uses both e and C. [sent-246, score-0.311]
60 Assume that the adversary knows in advance c edges of the matching. [sent-249, score-0.452]
61 The adversary selects uniformly at random a vertex whose privacy he wants to break from the set of vertices he does not know in advance. [sent-250, score-0.574]
62 3 Sustained attack on asymmetric bipartite b-matching We now consider the case where the graph G(A, B) is not clique-bipartite but only δ-regular and potentially asymmetric (as returned by algorithm 1). [sent-254, score-0.534]
63 1 Let G(A,B) be a δ-regular bipartite graph with color classes: A and B. [sent-256, score-0.332]
64 1 says that all but at most a small number η of people are (δ − c − φ(δ))-anonymous for every φ satisfying c + 2δ^{1/4} < φ(δ) < δ, if the adversary knows in advance c edges of the matching. [sent-265, score-0.479]
65 Fix ξ = c and assume that the adversary knows in advance at most δ^{1/4} edges of the matching. [sent-267, score-0.452]
66 1, we obtain that (for n large enough) all but at most 4n(1 + δ^{1/4}θ)/(δ^{1/4}θ^3) people from those that the adversary does not know in advance are ((1 − θ)δ − δ^{1/4})-anonymous. [sent-269, score-0.356]
67 2 Assume graph G(A, B) is δ-regular and the adversary knows in advance c edges of the matching, where c satisfies 1 ≤ c ≤ min(δ/4, δ(1 − θ − θ^2)). [sent-281, score-0.570]
68 The adversary selects uniformly at random a vertex whose privacy he wants to break from those that he does not know in advance. [sent-282, score-0.574]
69 4 Sustained attack on symmetric b-matching with adaptive anonymity We now consider the case where the graph is not only δ-regular but also symmetric as defined in definition 2. [sent-285, score-1.228]
70 Furthermore, we consider the case where we have varying values of δi for each node since some users want higher privacy than others. [sent-287, score-0.305]
71 It turns out that if the corresponding bipartite graph is symmetric (we define this term below), we can conclude that each user is (δi − c)-anonymous, where δi is the degree of the vertex associated with the user in the bipartite matching graph. [sent-288, score-0.969]
72 1 Let G(A, B) be a bipartite graph with color classes A, B and matching M = {(a1, b1), . . . , (an, bn)}. [sent-292, score-0.468]
73 From now on, the matching M with respect to which G(A, B) is symmetric is called the canonical matching of G(A, B). [sent-303, score-0.378]
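Indexing both color classes so that the canonical matching is {(a_i, b_i)}, one natural reading of the symmetry condition is that (a_i, b_j) is an edge exactly when (a_j, b_i) is; the check below is my interpretation for illustration, not a verbatim restatement of the definition:

```python
def is_symmetric(edges):
    """Symmetry w.r.t. the canonical matching {(a_i, b_i)} (assumed reading):
    edge (i, j) is present iff edge (j, i) is present; the matching edges
    (i, i) are trivially symmetric."""
    E = set(edges)
    return all((j, i) in E for (i, j) in E)

# A symmetric 2-regular example: the canonical matching plus the swap 0 <-> 1.
symmetric_edges = {(0, 0), (1, 1), (0, 1), (1, 0)}
asymmetric_edges = {(0, 0), (1, 1), (0, 1)}
```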
74 In such a case, we will prove that, if the adversary knows in advance c edges of the matching, then every person from the class A of degree δi is (δi − c)-anonymous. [sent-305, score-0.529]
75 So we obtain the same type of anonymity as in a clique-bipartite graph (see: lemma 4. [sent-306, score-0.842]
76 5 Assume that G(A, B) is a bipartite graph, symmetric with respect to its canonical matching M . [sent-309, score-0.433]
77 Assume furthermore that the adversary knows in advance c edges of the matching. [sent-310, score-0.452]
78 Then every person that he does not know in advance is (δi − c)-anonymous, where δi is the degree of the related vertex of the bipartite graph. [sent-311, score-0.407]
79 As a corollary, we obtain the same privacy guarantees in the symmetric case as the k-cliques case. [sent-312, score-0.362]
80 3 Assume the bipartite graph G(A, B) is symmetric with respect to its canonical matching M. [sent-314, score-0.415]
81 Assume that the adversary knows in advance c edges of the matching. [sent-315, score-0.452]
82 The adversary selects uniformly at random a vertex whose privacy he wants to break from the set of vertices he does not know in advance. [sent-316, score-0.574]
83 Then he succeeds with probability at most 1/(δi − c), where δi is the degree of the vertex of the matching graph associated with the user. [sent-317, score-0.370]
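The corollary's bound adapts to heterogeneous degrees. A small sketch (helper name is mine) computing the per-user bound 1/(δ_i − c) on the adversary's single-guess success probability, assuming δ_i > c for every user:

```python
def per_user_attack_bounds(degrees, c):
    """Upper bound 1 / (delta_i - c) on the adversary's single-guess success
    probability for each user, given c known matching edges (requires
    delta_i > c for every user)."""
    if any(d <= c for d in degrees):
        raise ValueError("every degree must exceed the number of known edges")
    return [1.0 / (d - c) for d in degrees]

bounds = per_user_attack_bounds([4, 8, 16], c=2)  # [0.5, 1/6, 1/14]
```

Users who request more privacy (larger δ_i) thus get a strictly smaller attack bound, which is the point of the adaptive setting.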
84 5 A symmetric graph G(A, B) may not remain symmetric according to definition 2. [sent-318, score-0.33]
85 7 In summary, the symmetric case is as resilient to sustained attack as the clique-bipartite case (the one underlying usual k-anonymity) if we set δi = δ = k everywhere. [sent-322, score-0.352]
86 [Figure residue: utility curves comparing b-matching, b-symmetric, and k-anonymity as the anonymity level varies from roughly 5 to 30; numeric axis ticks omitted.]
92 Both algorithms release data with suppressions to achieve a desired constant anonymity level δ. [sent-370, score-0.853]
93 Algorithm 1 achieved significantly better utility for any given fixed constant anonymity level δ, while algorithm 2 achieved a slight improvement. [sent-383, score-0.870]
94 We next explored Facebook social data experiments where each user has a different level of desired anonymity and has 7 discrete profile attributes which were binarized into d = 101 dimensions. [sent-384, score-0.9]
95 We used the number of friends fi a user has to compute their desired anonymity level (which decreases as the number of friends increases). [sent-385, score-0.85]
96 We compute F from the friend counts fi and, for each value of δ in the plot, we set user i's privacy level to δi = δ − (F − log fi). [sent-389, score-0.358]
97 Algorithms 1 and 2 consistently achieved significantly better utility in the adaptive anonymity setting while also achieving the desired level of privacy protection. [sent-392, score-1.183]
98 6 Discussion We described the adaptive anonymity problem where data is obfuscated to respect each individual user’s privacy settings. [sent-393, score-1.092]
99 It yields similar privacy protection while offering greater utility and the ability to handle heterogeneous anonymity levels for each user. [sent-395, score-1.202]
100 Preserving the privacy of sensitive relationships in graph data. [sent-538, score-0.374]
wordName wordTfidf (topN-words)
[('anonymity', 0.724), ('privacy', 0.256), ('adversary', 0.228), ('bipartite', 0.191), ('attack', 0.141), ('matching', 0.136), ('utility', 0.125), ('gij', 0.118), ('graph', 0.118), ('symmetric', 0.106), ('sustained', 0.105), ('xjk', 0.093), ('username', 0.092), ('usernames', 0.092), ('stars', 0.088), ('bn', 0.087), ('keys', 0.081), ('user', 0.081), ('xik', 0.081), ('obfuscated', 0.079), ('advance', 0.079), ('gb', 0.078), ('compatibility', 0.074), ('edges', 0.073), ('knows', 0.072), ('attacks', 0.07), ('resilience', 0.07), ('ypublic', 0.066), ('copies', 0.064), ('gs', 0.062), ('released', 0.06), ('protection', 0.058), ('succeeds', 0.051), ('person', 0.05), ('users', 0.049), ('database', 0.048), ('gk', 0.048), ('compatible', 0.046), ('anonymized', 0.043), ('asymmetric', 0.042), ('entries', 0.04), ('suppressions', 0.04), ('article', 0.039), ('vertex', 0.038), ('wik', 0.037), ('graphs', 0.034), ('yj', 0.033), ('adaptive', 0.033), ('attributes', 0.031), ('wants', 0.03), ('entry', 0.03), ('ming', 0.03), ('perfect', 0.029), ('agglomerative', 0.029), ('records', 0.028), ('personal', 0.027), ('degree', 0.027), ('people', 0.027), ('anonymizing', 0.026), ('caacgc', 0.026), ('carol', 0.026), ('ctagag', 0.026), ('ggacta', 0.026), ('gji', 0.026), ('paranoid', 0.026), ('tacaga', 0.026), ('tatgaa', 0.026), ('tgttga', 0.026), ('forest', 0.026), ('permutation', 0.025), ('relaxation', 0.025), ('release', 0.024), ('edge', 0.024), ('desired', 0.024), ('existence', 0.023), ('blossom', 0.023), ('alice', 0.023), ('cormode', 0.023), ('eve', 0.023), ('fred', 0.023), ('protect', 0.023), ('yik', 0.023), ('facebook', 0.023), ('color', 0.023), ('know', 0.022), ('symbols', 0.022), ('icde', 0.021), ('bob', 0.021), ('dave', 0.021), ('releasing', 0.021), ('columbia', 0.021), ('level', 0.021), ('handle', 0.02), ('pods', 0.02), ('achieve', 0.02), ('personalized', 0.019), ('families', 0.019), ('levels', 0.019), ('matched', 0.019), ('explored', 0.019)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999946 25 nips-2013-Adaptive Anonymity via $b$-Matching
Author: Krzysztof M. Choromanski, Tony Jebara, Kui Tang
Abstract: The adaptive anonymity problem is formalized where each individual shares their data along with an integer value to indicate their personal level of desired privacy. This problem leads to a generalization of k-anonymity to the b-matching setting. Novel algorithms and theory are provided to implement this type of anonymity. The relaxation achieves better utility, admits theoretical privacy guarantees that are as strong, and, most importantly, accommodates a variable level of anonymity for each individual. Empirical results confirm improved utility on benchmark and social data-sets.
2 0.20104286 177 nips-2013-Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation
Author: John Duchi, Martin J. Wainwright, Michael Jordan
Abstract: We provide a detailed study of the estimation of probability distributions— discrete and continuous—in a stringent setting in which data is kept private even from the statistician. We give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental trade-offs between privacy and convergence rate, as well as providing tools to allow movement along the privacy-statistical efficiency continuum. One of the consequences of our results is that Warner’s classical work on randomized response is an optimal way to perform survey sampling while maintaining privacy of the respondents. 1
3 0.14003403 14 nips-2013-A Stability-based Validation Procedure for Differentially Private Machine Learning
Author: Kamalika Chaudhuri, Staal A. Vinterbo
Abstract: Differential privacy is a cryptographically motivated definition of privacy which has gained considerable attention in the algorithms, machine-learning and datamining communities. While there has been an explosion of work on differentially private machine learning algorithms, a major barrier to achieving end-to-end differential privacy in practical machine learning applications is the lack of an effective procedure for differentially private parameter tuning, or, determining the parameter value, such as a bin size in a histogram, or a regularization parameter, that is suitable for a particular application. In this paper, we introduce a generic validation procedure for differentially private machine learning algorithms that apply when a certain stability condition holds on the training algorithm and the validation performance metric. The training data size and the privacy budget used for training in our procedure is independent of the number of parameter values searched over. We apply our generic procedure to two fundamental tasks in statistics and machine-learning – training a regularized linear classifier and building a histogram density estimator that result in end-toend differentially private solutions for these problems. 1
4 0.11665791 2 nips-2013-(Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings
Author: Abhradeep Guha Thakurta, Adam Smith
Abstract: We give differentially private algorithms for a large class of online learning algorithms, in both the full information and bandit settings. Our algorithms aim to minimize a convex loss function which is a sum of smaller convex loss terms, one for each data point. To design our algorithms, we modify the popular mirror descent approach, or rather a variant called follow the approximate leader. The technique leads to the first nonprivate algorithms for private online learning in the bandit setting. In the full information setting, our algorithms improve over the regret bounds of previous work (due to Dwork, Naor, Pitassi and Rothblum (2010) and Jain, Kothari and Thakurta (2012)). In many cases, our algorithms (in both settings) match the dependence on the input length, T , of the optimal nonprivate regret bounds up to logarithmic factors in T . Our algorithms require logarithmic space and update time. 1
5 0.10997204 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching
Author: Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Muse, Guillermo Sapiro
Abstract: Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. We propose a robust graph matching algorithm inspired in sparsityrelated techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared with state-of-the-art graph matching techniques in both synthetic and real graphs. We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. The code is publicly available. 1
6 0.10934352 231 nips-2013-Online Learning with Switching Costs and Other Adaptive Adversaries
7 0.091186091 102 nips-2013-Efficient Algorithm for Privately Releasing Smooth Queries
9 0.073933072 227 nips-2013-Online Learning in Markov Decision Processes with Adversarially Chosen Transition Probability Distributions
10 0.068802953 300 nips-2013-Solving the multi-way matching problem by permutation synchronization
11 0.056519769 7 nips-2013-A Gang of Bandits
12 0.055872619 5 nips-2013-A Deep Architecture for Matching Short Texts
13 0.054557528 207 nips-2013-Near-optimal Anomaly Detection in Graphs using Lovasz Extended Scan Statistic
14 0.054279383 125 nips-2013-From Bandits to Experts: A Tale of Domination and Independence
15 0.044797983 289 nips-2013-Scalable kernels for graphs with continuous attributes
16 0.043734848 151 nips-2013-Learning Chordal Markov Networks by Constraint Satisfaction
17 0.043608546 273 nips-2013-Reinforcement Learning in Robust Markov Decision Processes
18 0.043596391 308 nips-2013-Spike train entropy-rate estimation using hierarchical Dirichlet process priors
19 0.041742876 108 nips-2013-Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion
20 0.041128047 191 nips-2013-Minimax Optimal Algorithms for Unconstrained Linear Optimization
topicId topicWeight
[(0, 0.112), (1, 0.003), (2, 0.054), (3, -0.045), (4, 0.051), (5, 0.012), (6, 0.015), (7, -0.086), (8, 0.019), (9, 0.184), (10, 0.128), (11, -0.105), (12, 0.006), (13, 0.136), (14, -0.049), (15, 0.021), (16, 0.059), (17, -0.078), (18, -0.064), (19, -0.045), (20, 0.061), (21, -0.075), (22, 0.043), (23, -0.059), (24, -0.005), (25, -0.073), (26, 0.094), (27, 0.04), (28, -0.007), (29, 0.072), (30, -0.027), (31, 0.063), (32, 0.031), (33, 0.027), (34, -0.002), (35, 0.004), (36, -0.059), (37, -0.094), (38, -0.082), (39, 0.049), (40, -0.033), (41, 0.006), (42, 0.031), (43, -0.0), (44, 0.067), (45, -0.021), (46, -0.076), (47, 0.029), (48, -0.033), (49, 0.068)]
simIndex simValue paperId paperTitle
same-paper 1 0.92841268 25 nips-2013-Adaptive Anonymity via $b$-Matching
Author: Krzysztof M. Choromanski, Tony Jebara, Kui Tang
Abstract: The adaptive anonymity problem is formalized where each individual shares their data along with an integer value to indicate their personal level of desired privacy. This problem leads to a generalization of k-anonymity to the b-matching setting. Novel algorithms and theory are provided to implement this type of anonymity. The relaxation achieves better utility, admits theoretical privacy guarantees that are as strong, and, most importantly, accommodates a variable level of anonymity for each individual. Empirical results confirm improved utility on benchmark and social data-sets.
2 0.66007602 14 nips-2013-A Stability-based Validation Procedure for Differentially Private Machine Learning
Author: Kamalika Chaudhuri, Staal A. Vinterbo
Abstract: Differential privacy is a cryptographically motivated definition of privacy which has gained considerable attention in the algorithms, machine-learning and datamining communities. While there has been an explosion of work on differentially private machine learning algorithms, a major barrier to achieving end-to-end differential privacy in practical machine learning applications is the lack of an effective procedure for differentially private parameter tuning, or, determining the parameter value, such as a bin size in a histogram, or a regularization parameter, that is suitable for a particular application. In this paper, we introduce a generic validation procedure for differentially private machine learning algorithms that apply when a certain stability condition holds on the training algorithm and the validation performance metric. The training data size and the privacy budget used for training in our procedure is independent of the number of parameter values searched over. We apply our generic procedure to two fundamental tasks in statistics and machine-learning – training a regularized linear classifier and building a histogram density estimator that result in end-toend differentially private solutions for these problems. 1
3 0.63765907 177 nips-2013-Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation
Author: John Duchi, Martin J. Wainwright, Michael Jordan
Abstract: We provide a detailed study of the estimation of probability distributions— discrete and continuous—in a stringent setting in which data is kept private even from the statistician. We give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental trade-offs between privacy and convergence rate, as well as providing tools to allow movement along the privacy-statistical efficiency continuum. One of the consequences of our results is that Warner’s classical work on randomized response is an optimal way to perform survey sampling while maintaining privacy of the respondents. 1
4 0.56437099 102 nips-2013-Efficient Algorithm for Privately Releasing Smooth Queries
Author: Ziteng Wang, Kai Fan, Jiaqi Zhang, Liwei Wang
Abstract: We study differentially private mechanisms for answering smooth queries on databases consisting of data points in Rd . A K-smooth query is specified by a function whose partial derivatives up to order K are all bounded. We develop an -differentially private mechanism which for the class of K-smooth queries has K accuracy O(n− 2d+K / ). The mechanism first outputs a summary of the database. To obtain an answer of a query, the user runs a public evaluation algorithm which contains no information of the database. Outputting the summary runs in time d O(n1+ 2d+K ), and the evaluation algorithm for answering a query runs in time d+2+ 2d K ˜ O(n 2d+K ). Our mechanism is based on L∞ -approximation of (transformed) smooth functions by low degree even trigonometric polynomials with small and efficiently computable coefficients. 1
5 0.5402422 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching
Author: Marcelo Fiori, Pablo Sprechmann, Joshua Vogelstein, Pablo Muse, Guillermo Sapiro
Abstract: Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. We propose a robust graph matching algorithm inspired in sparsityrelated techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared with state-of-the-art graph matching techniques in both synthetic and real graphs. We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. The code is publicly available. 1
6 0.44992241 207 nips-2013-Near-optimal Anomaly Detection in Graphs using Lovasz Extended Scan Statistic
7 0.42960292 87 nips-2013-Density estimation from unweighted k-nearest neighbor graphs: a roadmap
8 0.42604998 2 nips-2013-(Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings
9 0.40542495 8 nips-2013-A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles
10 0.3978442 134 nips-2013-Graphical Models for Inference with Missing Data
11 0.38886392 300 nips-2013-Solving the multi-way matching problem by permutation synchronization
12 0.37428391 328 nips-2013-The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited
13 0.3675769 332 nips-2013-Tracking Time-varying Graphical Structure
14 0.35703969 35 nips-2013-Analyzing the Harmonic Structure in Graph-Based Learning
15 0.35165492 290 nips-2013-Scoring Workers in Crowdsourcing: How Many Control Questions are Enough?
16 0.34306318 182 nips-2013-Manifold-based Similarity Adaptation for Label Propagation
17 0.34107465 350 nips-2013-Wavelets on Graphs via Deep Learning
18 0.33603707 125 nips-2013-From Bandits to Experts: A Tale of Domination and Independence
19 0.33349901 151 nips-2013-Learning Chordal Markov Networks by Constraint Satisfaction
20 0.33002749 128 nips-2013-Generalized Method-of-Moments for Rank Aggregation
topicId topicWeight
[(2, 0.042), (16, 0.022), (33, 0.093), (34, 0.083), (36, 0.024), (41, 0.038), (49, 0.028), (56, 0.103), (70, 0.045), (85, 0.062), (89, 0.035), (91, 0.281), (93, 0.033), (95, 0.014)]
simIndex simValue paperId paperTitle
1 0.80878574 26 nips-2013-Adaptive Market Making via Online Learning
Author: Jacob Abernethy, Satyen Kale
Abstract: We consider the design of strategies for market making in an exchange. A market maker generally seeks to profit from the difference between the buy and sell price of an asset, yet the market maker also takes on exposure risk in the event of large price movements. Profit guarantees for market making strategies have typically required certain stochastic assumptions on the price fluctuations of the asset in question; for example, assuming a model in which the price process is mean reverting. We propose a class of “spread-based” market making strategies whose performance can be controlled even under worst-case (adversarial) settings. We prove structural properties of these strategies which allow us to design a master algorithm that obtains low regret relative to the best such strategy in hindsight. We run a set of experiments showing favorable performance on recent real-world stock price data.
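A toy version of the "spread-based" idea (quote a window of width equal to the spread and slide it with the price) can be sketched as follows; this is our simplification for illustration, and it carries none of the worst-case guarantees the paper proves for its strategy class and master algorithm:

```python
def spread_strategy_profit(prices, window, spread):
    """Toy 'spread-based' market maker: keep a quote window of width
    `spread`; buy one share when the price touches the bottom of the
    window, sell one when it touches the top, and slide the window to
    follow the price. Profits accrue when the price round-trips across
    the window; inventory risk remains in whatever is held at the end."""
    lo = window                  # bottom of the current quote window
    inventory, cash = 0, 0.0
    for p in prices:
        if p <= lo:              # buy at the bottom of the window
            inventory += 1
            cash -= p
            lo = p               # slide the window down with the price
        elif p >= lo + spread:   # sell at the top of the window
            if inventory > 0:
                inventory -= 1
                cash += p
            lo = p - spread      # slide the window up with the price
    return cash, inventory

# oscillating toy path: three buys, two sells, one share left in inventory
cash, inv = spread_strategy_profit([10, 9, 11, 9, 11], window=10, spread=1)
```

Marking the leftover share to the final price of 11, the toy strategy ends with positive net value on this oscillating path.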
same-paper 2 0.74104071 25 nips-2013-Adaptive Anonymity via $b$-Matching
Author: Krzysztof M. Choromanski, Tony Jebara, Kui Tang
Abstract: The adaptive anonymity problem is formalized where each individual shares their data along with an integer value to indicate their personal level of desired privacy. This problem leads to a generalization of k-anonymity to the b-matching setting. Novel algorithms and theory are provided to implement this type of anonymity. The relaxation achieves better utility, admits theoretical privacy guarantees that are as strong, and, most importantly, accommodates a variable level of anonymity for each individual. Empirical results confirm improved utility on benchmark and social data-sets.
3 0.72247112 196 nips-2013-Modeling Overlapping Communities with Node Popularities
Author: Prem Gopalan, Chong Wang, David Blei
Abstract: We develop a probabilistic approach for accurate network modeling using node popularities within the framework of the mixed-membership stochastic blockmodel (MMSB). Our model integrates two basic properties of nodes in social networks: homophily and preferential connection to popular nodes. We develop a scalable algorithm for posterior inference, based on a novel nonconjugate variant of stochastic variational inference. We evaluate the link prediction accuracy of our algorithm on nine real-world networks with up to 60,000 nodes, and on simulated networks with degree distributions that follow a power law. We demonstrate that the AMP predicts significantly better than the MMSB.
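The base model being extended can be sketched generatively. Below is a vanilla MMSB sampler; the paper's contributions (per-node popularity terms and the nonconjugate stochastic variational inference) are omitted, and the hyperparameters here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mmsb(n, k, alpha=0.1, B=None):
    """Sample an undirected graph from a plain mixed-membership stochastic
    blockmodel: each node draws a membership vector theta_i from a
    Dirichlet; each node pair draws per-pair community indicators and then
    an edge with the block-interaction probability B[z_i, z_j]."""
    if B is None:
        B = 0.8 * np.eye(k) + 0.02          # assortative: dense within blocks
    theta = rng.dirichlet(alpha * np.ones(k), size=n)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            zi = rng.choice(k, p=theta[i])  # i's community for this pair
            zj = rng.choice(k, p=theta[j])  # j's community for this pair
            A[i, j] = A[j, i] = rng.binomial(1, B[zi, zj])
    return A, theta

A, theta = sample_mmsb(n=10, k=3)
```

Per-pair indicators are what let a node participate in several communities at once, unlike a single-membership blockmodel.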
4 0.5573262 159 nips-2013-Learning Prices for Repeated Auctions with Strategic Buyers
Author: Kareem Amin, Afshin Rostamizadeh, Umar Syed
Abstract: Inspired by real-time ad exchanges for online display advertising, we consider the problem of inferring a buyer’s value distribution for a good when the buyer is repeatedly interacting with a seller through a posted-price mechanism. We model the buyer as a strategic agent whose goal is to maximize her long-term surplus, and we are interested in mechanisms that maximize the seller’s long-term revenue. We define the natural notion of strategic regret — the lost revenue as measured against a truthful (non-strategic) buyer. We present seller algorithms that are no-(strategic)-regret when the buyer discounts her future surplus — i.e., the buyer prefers showing advertisements to users sooner rather than later. We also give a lower bound on strategic regret that increases as the buyer’s discounting weakens and shows, in particular, that any seller algorithm will suffer linear strategic regret if there is no discounting.
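The truthful baseline against which strategic regret is measured is easy to simulate: a buyer who accepts any price at or below her value, facing a seller doing a simple monotone price search. This is an illustrative toy, not one of the paper's seller algorithms (which must also cope with strategic, surplus-discounting buyers):

```python
def run_posted_price(value, rounds, lo=0.0, hi=1.0):
    """Posted-price selling to a *truthful* buyer with a fixed value.

    Binary-search pricing: raise the candidate price after a sale and
    lower it after a rejection. Returns total seller revenue."""
    revenue = 0.0
    for _ in range(rounds):
        price = (lo + hi) / 2
        if value >= price:   # truthful buyer accepts whenever price <= value
            revenue += price
            lo = price       # accepted: probe higher
        else:
            hi = price       # rejected: probe lower, forfeiting this round
    return revenue

rev = run_posted_price(value=0.7, rounds=50)
# regret vs. the omniscient benchmark of always posting exactly the value
regret = 0.7 * 50 - rev
```

Against a truthful buyer the price converges quickly, but each rejected probe forfeits a full sale; a strategic buyer would deliberately reject early prices to drive this search downward, which is exactly the behavior the paper's discounting assumption limits.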
5 0.53936273 182 nips-2013-Manifold-based Similarity Adaptation for Label Propagation
Author: Masayuki Karasuyama, Hiroshi Mamitsuka
Abstract: Label propagation is one of the state-of-the-art methods for semi-supervised learning, which estimates labels by propagating label information through a graph. Label propagation assumes that data points (nodes) connected in a graph should have similar labels. Consequently, the label estimation heavily depends on the edge weights in the graph, which represent the similarity of each node pair. We propose a method for a graph to capture the manifold structure of input features using edge weights parameterized by a similarity function. In this approach, edge weights represent both similarity and local reconstruction weight simultaneously, both being reasonable for label propagation. For further justification, we provide analytical considerations including an interpretation as cross-validation of a propagation model in the feature space, and an error analysis based on a low-dimensional manifold model. Experimental results demonstrate the effectiveness of our approach on both synthetic and real datasets.
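The propagation step that the similarity adaptation plugs into can be sketched with fixed edge weights (standard normalized label propagation in the style of Zhou et al.; learning the edge weights, which is this paper's contribution, is omitted):

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, n_iter=200):
    """Iterative label propagation on a weighted graph.

    W: (n, n) symmetric edge-weight matrix; Y: (n, c) one-hot labels with
    all-zero rows for unlabeled nodes. Iterates F <- alpha*S*F + (1-alpha)*Y
    with S the symmetrically normalized weights; returns soft label scores."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y  # propagate, then re-clamp labels
    return F

# toy chain graph 0-1-2-3 with the two endpoints labeled
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.zeros((4, 2))
Y[0, 0] = 1.0   # node 0 labeled class 0
Y[3, 1] = 1.0   # node 3 labeled class 1
F = label_propagation(W, Y)
pred = F.argmax(axis=1)   # each unlabeled node takes its nearer endpoint's class
```

Because the predictions depend entirely on W, any error in the edge weights propagates directly into the labels, which is the motivation for adapting the similarity function.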
6 0.53728467 79 nips-2013-DESPOT: Online POMDP Planning with Regularization
7 0.53693116 125 nips-2013-From Bandits to Experts: A Tale of Domination and Independence
8 0.53599519 282 nips-2013-Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching
9 0.53344125 228 nips-2013-Online Learning of Dynamic Parameters in Social Networks
10 0.53164315 310 nips-2013-Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators.
11 0.53127867 249 nips-2013-Polar Operators for Structured Sparse Estimation
12 0.52916288 116 nips-2013-Fantope Projection and Selection: A near-optimal convex relaxation of sparse PCA
13 0.5286954 104 nips-2013-Efficient Online Inference for Bayesian Nonparametric Relational Models
14 0.52856261 232 nips-2013-Online PCA for Contaminated Data
15 0.5285483 285 nips-2013-Robust Transfer Principal Component Analysis with Rank Constraints
16 0.52843237 184 nips-2013-Marginals-to-Models Reducibility
17 0.52636254 124 nips-2013-Forgetful Bayes and myopic planning: Human learning and decision-making in a bandit setting
18 0.5258382 248 nips-2013-Point Based Value Iteration with Optimal Belief Compression for Dec-POMDPs
19 0.52556503 123 nips-2013-Flexible sampling of discrete data correlations without the marginal distributions
20 0.52534086 278 nips-2013-Reward Mapping for Transfer in Long-Lived Agents