
298 nips-2012-Scalable Inference of Overlapping Communities


Source: pdf

Author: Prem Gopalan, Sean Gerrish, Michael Freedman, David M. Blei, David M. Mimno

Abstract: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Department of Computer Science, Princeton University, Princeton, NJ 08540. Abstract: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. [sent-6, score-0.61]

2 Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). [sent-7, score-0.403]

3 It naturally interleaves subsampling the network with estimating its community structure. [sent-8, score-0.494]

4 It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms. [sent-10, score-0.871]

5 (§1, Introduction) A central problem in network analysis is to identify communities: groups of related nodes with dense internal connections and few external connections [1, 2, 3]. [sent-11, score-0.264]

6 Classical methods for community detection assume that each node participates in a single community [4, 5, 6]. [sent-12, score-0.777]

7 For example, a member of a large social network might belong to overlapping communities of co-workers, neighbors, and school friends. [sent-14, score-0.633]

8 To address this problem, researchers have developed several methods for detecting overlapping communities in observed networks. [sent-15, score-0.484]

9 In this paper, we focus on the mixed-membership stochastic blockmodel (MMSB) [2], a probabilistic model that allows each node of a network to exhibit a mixture of communities. [sent-17, score-0.514]

10 The MMSB casts community detection as posterior inference: Given an observed network, we estimate the posterior community memberships of its nodes. [sent-18, score-0.788]

11 The MMSB can capture complex community structure and has been adapted in several ways [11, 12]; however, its applications have been limited because its corresponding inference algorithms have not scaled to large networks [2]. [sent-19, score-0.4]

12 When compared to other scalable methods for overlapping community detection, we found that the MMSB gives better predictions of new connections and more closely recovers ground-truth communities. [sent-23, score-0.4]

13 The original MMSB algorithm optimizes the variational objective by coordinate ascent, processing every pair of nodes in each iteration [2]. [sent-25, score-0.327]

14 In this paper, we develop stochastic optimization algorithms [13, 14] to fit the variational distribution, where we obtain noisy estimates of the gradient by subsampling the network. [sent-27, score-0.389]

15 Figure 1: Figure 1(a) shows communities (see §2) discovered in a co-authorship network of 1,600 researchers [16] by an a-MMSB model with 50 communities. [sent-57, score-0.501]

16 The color of author nodes indicates their most likely posterior community membership. [sent-58, score-0.426]

17 The size of nodes indicates bridgeness [17], a measure of participation in multiple communities. [sent-59, score-0.249]

18 Our algorithm alternates between subsampling from the network and adjusting its estimate of the underlying communities. [sent-63, score-0.24]

19 (§2, Modeling overlapping communities) In this section, we introduce the assortative mixed-membership stochastic blockmodel (a-MMSB), a statistical model of networks in which nodes participate in multiple communities. [sent-69, score-0.882]

20 Let y denote the observed links of an undirected network, where yab = 1 if nodes a and b are linked and 0 otherwise. [sent-71, score-0.562]

21 Each node a is associated with community memberships πa , a distribution over communities; each community is associated with a community strength βk ∈ (0, 1), which captures how tightly its members are linked. [sent-73, score-1.135]

22 The probability that two nodes are linked is governed by the similarity of their community memberships and the strength of their shared communities. [sent-74, score-0.54]

23 For each community k, draw community strength βk ∼ Beta(η). [sent-77, score-0.532]

24 For each node a, draw community memberships πa ∼ Dirichlet(α). [sent-79, score-0.651]

25 (c) Draw each link y_ab ∼ Bernoulli(r), where
r = β_k if z_{a→b,k} = z_{a←b,k} = 1, and r = ε if z_{a→b} ≠ z_{a←b}.  (1)
Here z_{a→b} ∼ Multinomial(π_a) and z_{a←b} ∼ Multinomial(π_b) are the per-pair community indicators drawn in step (b), and ε is a small constant probability of a link between nodes that select different communities. [sent-83, score-0.321]
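To make the generative process concrete, the following is a minimal sketch of sampling a network from the a-MMSB. The hyperparameter values, the small inter-community probability eps, and all names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def sample_ammsb_network(N=50, K=5, alpha=0.1, eta=(1.0, 1.0), eps=1e-4, seed=0):
    """Sample an undirected network from the a-MMSB generative process (Eq. 1)."""
    rng = np.random.default_rng(seed)
    beta = rng.beta(eta[0], eta[1], size=K)          # community strengths beta_k ~ Beta(eta)
    pi = rng.dirichlet(alpha * np.ones(K), size=N)   # memberships pi_a ~ Dirichlet(alpha)
    y = np.zeros((N, N), dtype=int)
    for a in range(N):
        for b in range(a + 1, N):
            z_ab = rng.choice(K, p=pi[a])            # indicator z_{a->b} ~ Multinomial(pi_a)
            z_ba = rng.choice(K, p=pi[b])            # indicator z_{a<-b} ~ Multinomial(pi_b)
            r = beta[z_ab] if z_ab == z_ba else eps  # link probability from Eq. 1
            y[a, b] = y[b, a] = rng.binomial(1, r)
    return y, pi, beta
```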

26 We use a subclass of the MMSB models that is appropriate for community detection in undirected networks. [sent-84, score-0.317]

27 In §2 we argue why the a-MMSB is more appropriate for community detection than the MMSB. [sent-89, score-0.291]

28 This captures assortativity—if two nodes are linked, it is likely that the latent community indicators were the same. [sent-93, score-0.375]

29 When the full MMSB is applied to undirected networks, two hypotheses compete to explain a link between each pair of nodes: either both nodes exhibit the same community or they are in different communities that link to each other. [sent-95, score-1.081]

30 The posterior lets us form a predictive distribution of unseen links and measure latent network properties of the observed data. [sent-97, score-0.389]

31 The posterior over π1:N represents the community memberships of the nodes, and the posterior over the interaction indicator variables z identifies link communities in the network [8]. [sent-98, score-1.123]

32 For example, in a social network one member’s link to another might arise because they are from the same high school while another might arise because they are co-workers. [sent-99, score-0.267]

33 In Figure 1(a), we sized author nodes according to their expected posterior bridgeness [17], a measure of participation in multiple communities (see §5). [sent-101, score-0.663]
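As a rough sketch of how such a bridgeness score can be computed from estimated membership vectors; the normalization follows our reading of the definition in [17] and is not taken from this paper.

```python
import numpy as np

def bridgeness(pi):
    """Bridgeness of each node from its membership vectors pi (rows sum to 1).
    Scores 1 for a node equally active in all K communities, 0 for a node
    that belongs to a single community."""
    N, K = pi.shape
    dist = np.sqrt(np.sum((pi - 1.0 / K) ** 2, axis=1))  # distance from uniform membership
    return 1.0 - np.sqrt(K / (K - 1.0)) * dist
```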

34 In the context of the MMSB (and the a-MMSB), coordinate ascent iterates between analyzing all O(N²) node pairs and updating the community memberships of the N nodes [2]. [sent-105, score-0.845]

35 In this section, we will derive a stochastic variational inference algorithm. [sent-106, score-0.235]

36 Our algorithm iterates between sampling random pairs of nodes and updating node memberships. [sent-107, score-0.456]

37 (Eq. 2) The variational posterior over the link community assignments z is parameterized by the per-interaction memberships φ; the node community distributions π by the community memberships γ; and the link probabilities β by the community strengths λ. [sent-115, score-1.849]

38 Eq. 4 contains summations over communities and nodes; we call these global terms. [sent-122, score-0.455]

39 They relate to the global variables, which are the community strengths λ and per-node memberships γ. [sent-123, score-0.475]

40 The remaining lines contain summations over all node pairs, which we call local terms. [sent-124, score-0.304]

41 We will use stochastic variational inference [14], which optimizes the ELBO with respect to the global variational parameters using stochastic gradient ascent. [sent-129, score-0.504]

42 Stochastic variational inference is a coordinate ascent algorithm that iteratively updates local and global parameters. [sent-134, score-0.273]

43 For each iteration, we first subsample the network and compute optimal local parameters for the sample, given the current settings of the global parameters. [sent-135, score-0.25]

44 The selection of subsamples in each iteration provides a way to plug in a variety of network subsampling algorithms. [sent-138, score-0.311]

45 However, to maintain a correct stochastic optimization algorithm of the variational objective, the subsampling method must be valid. [sent-139, score-0.311]

46 The global step updates the global community strengths λ and community memberships γ with a stochastic gradient of the ELBO in Eq. 4. [sent-142, score-0.897]

47 Eq. 4 contains summations over all O(N²) node pairs. [sent-145, score-0.274]

48 Now consider drawing a node pair (a, b) at random from a population distribution g(a, b) over the M = N(N − 1)/2 node pairs. [sent-146, score-0.567]

49 Eq. 4, evaluated for a node pair sampled from g, gives a noisy but unbiased estimate of the ELBO. [sent-152, score-0.415]

50 Following [15], the stochastic natural gradients computed from a sample pair (a, b) are
∂γ^t_{a,k} = α_k + (1/g(a,b)) · φ_{a→b,k} − γ^{t−1}_{a,k},  (6)
∂λ^t_{k,i} = η_{k,i} + (1/g(a,b)) · φ_{a→b,k} · φ_{a←b,k} · y_{ab,i} − λ^{t−1}_{k,i},  (7)
where y_{ab,0} = y_{ab} and y_{ab,1} = 1 − y_{ab}. [sent-153, score-0.404]
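A minimal sketch of the global step implied by Eqs. 6–7, assuming the local parameters φ for the sampled pair have already been computed; the Robbins–Monro step-size schedule and all variable names are our assumptions.

```python
import numpy as np

def global_step(gamma, lam, a, b, phi_ab, phi_ba, y_ab, g_ab,
                alpha, eta, t, tau0=1024.0, kappa=0.5):
    """One stochastic natural-gradient update from a sampled pair (a, b).
    gamma: (N, K) membership parameters; lam: (K, 2) community strengths;
    phi_ab, phi_ba: (K,) local parameters; g_ab: probability of sampling (a, b)."""
    rho = (t + tau0) ** (-kappa)  # decreasing step size satisfying Robbins-Monro
    # Eq. 6: gamma_new = gamma_old + rho * (alpha + phi/g - gamma_old),
    # applied symmetrically to both endpoints of the pair.
    gamma[a] = (1 - rho) * gamma[a] + rho * (alpha + phi_ab / g_ab)
    gamma[b] = (1 - rho) * gamma[b] + rho * (alpha + phi_ba / g_ab)
    # Eq. 7, over the two Beta coordinates, with y_{ab,0} = y_ab and y_{ab,1} = 1 - y_ab.
    y_vec = np.array([y_ab, 1 - y_ab])
    lam[:] = (1 - rho) * lam + rho * (eta + (phi_ab * phi_ba)[:, None] * y_vec / g_ab)
    return gamma, lam
```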

51 Our algorithm has assumed that the node pairs in the subset S are sampled independently. [sent-157, score-0.315]

52 First, we assume that the union of these sets s_i is the total set of all node pairs U: U = ∪_i s_i. [sent-163, score-0.257]

53 Let h(t) be a distribution over predefined sets of node pairs. [sent-167, score-0.257]

54 Computing natural gradients (along with subsampling) leads to scalable variational inference algorithms [14]. [sent-170, score-0.239]

55 (Algorithm 1, excerpt) 2: while the convergence criterion is not met do 3: sample a subset S of node pairs. [sent-172, score-0.26]

56 The gradient of Eq. 8 with respect to the global variational parameters (γ, λ) is a noisy but unbiased estimate of the natural gradient of the ELBO in Eq. 4. [sent-178, score-0.261]

57 Recall that there is a pair of per-interaction membership parameters for each node pair, φ_{a→b} and φ_{a←b}, representing the posterior approximation of which communities are active in determining whether there is a link. [sent-186, score-0.679]

58 Each iteration subsamples the network and computes the local and global updates. [sent-192, score-0.265]

59 We have derived this algorithm with node pairs sampled from arbitrary population distributions g(a, b) or h(t). [sent-193, score-0.339]

60 We stop training on a network (the training set) when the average change in expected log likelihood on held-out data (the validation set) is less than 0. [sent-199, score-0.236]

61 The test and validation sets used in §5 have equal parts links and non-links, selected randomly from the network. [sent-201, score-0.268]

62 A 50% links validation set poorly represents the severe class imbalance between links and non-links in real-world networks. [sent-202, score-0.432]

63 Therefore, we compute the validation log likelihood at network sparsity by reweighting the average link and non-link log likelihood (estimated from the 50% links validation set) by their respective proportions in the network. [sent-204, score-0.694]
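A short sketch of this reweighting as we read it: average the link and non-link log likelihoods separately on the 50/50 validation set, then weight them by the network's actual link density d; the function and variable names are ours.

```python
import numpy as np

def loglik_at_sparsity(loglik_links, loglik_nonlinks, d):
    """Reweight a 50% links validation set to the network's true link density d.
    loglik_links, loglik_nonlinks: arrays of per-pair predictive log likelihoods."""
    # Weight each class by its proportion among all node pairs in the network.
    return d * np.mean(loglik_links) + (1.0 - d) * np.mean(loglik_nonlinks)
```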

64 Our L-step (local step) can be computed in O(nK), where n is the number of node pairs sampled in each iteration. [sent-207, score-0.315]

65 …ones that help us better assess the community structure. [sent-216, score-0.254]

66 Finally, large, real-world networks are often sparse, with links accounting for less than 0. [sent-218, score-0.285]

67 The simplest method is to sample node pairs uniformly at random. [sent-223, score-0.287]

68 We divide the M node pairs into two strata: links and non-links. [sent-236, score-0.476]

69 If the non-link stratum is sampled, and N_0 is the estimated total number of non-links, then
g(a, b) = 1/N_0 if y_ab = 0, and g(a, b) = 0 if y_ab = 1.  (11)
The population distribution when the link stratum is sampled is symmetric. [sent-238, score-0.607]

70 This method combines set-based sampling and stratified sampling to focus on observed links in local neighborhoods. [sent-240, score-0.315]

71 Since the number of non-links associated with each node is usually large, dividing them into many sets allows the computation in each iteration to be fast. [sent-242, score-0.283]

72 At each iteration, we first select a random node and either select its link set or sample one of its m non-link sets, uniformly at random. [sent-243, score-0.351]
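A sketch of this stratified random node sampling, assuming the non-links of each node have been pre-partitioned into m fixed sets and that the m + 1 strata of the chosen node are selected uniformly (the uniform choice and the resulting form of h(t) are assumptions):

```python
import numpy as np

def sample_stratified(link_sets, nonlink_sets, N, m, rng):
    """Pick a random node, then one of its m + 1 strata uniformly at random.
    link_sets[a]: list of link pairs of node a; nonlink_sets[a]: m lists of
    non-link pairs. Returns the sampled set and its selection probability h."""
    a = rng.integers(N)
    h = 1.0 / (N * (m + 1))           # probability of selecting this particular set
    s = rng.integers(m + 1)
    if s == 0:
        return link_sets[a], h        # the link stratum of node a
    return nonlink_sets[a][s - 1], h  # one of the m non-link strata
```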

73 Stratified random node sampling gives the best gains in convergence speed (see §5). [sent-246, score-0.308]

74 [3] described a model of overlapping communities in networks (“the Poisson model”) where the number of links between two nodes is a Poisson random variable. [sent-248, score-0.866]

75 Recently, other researchers have proposed latent feature network models [20, 21] that compute the probabilities of links based on the interactions between binary features associated with each node. [sent-249, score-0.327]

76 Compared to CP and LC, which do not provide predictions, we will show that the MMSB more reliably recovers the true community structure. [sent-260, score-0.254]

77 N is the number of nodes, K_max is the maximum number of communities analyzed, and d is the percentage of node pairs that are links. [sent-264, score-0.65]

78 The times until convergence for the different methods are T_c^stoch and T_c^batch, while the time required to reach 90% of the perplexity at a-MMSB's convergence is T_90%^stoch. [sent-265, score-0.338]

79 Stratified random node sampling is an order of magnitude faster than other sampling methods on the hep-ph, astro-ph and hep-th2 networks (Bottom). [sent-310, score-0.424]

80 We set aside a validation and a test set, each having 10% of the network links and an equal number of non-links (see §3. [sent-314, score-0.357]

81 We approximate the probability that a link exists between two nodes using posterior expectations of β and π. [sent-316, score-0.291]

82 We then calculate perplexity, which is the exponential of the average predictive log likelihood of held-out node pairs. [sent-317, score-0.3]
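A sketch of the held-out evaluation as described here: approximate the link probability with posterior expectations of π and β (ε is the small inter-community probability from Eq. 1), then exponentiate the negated average predictive log likelihood; the sign convention and the ε value are our assumptions.

```python
import numpy as np

def link_prob(pi_a, pi_b, beta, eps=1e-4):
    """Approximate p(y_ab = 1) from posterior expectations of pi and beta."""
    shared = pi_a * pi_b                          # mass on both picking community k
    return np.dot(shared, beta) + (1.0 - shared.sum()) * eps

def perplexity(heldout_pairs, y, pi, beta, eps=1e-4):
    """Exponential of the negated average predictive log likelihood."""
    ll = []
    for a, b in heldout_pairs:
        p = link_prob(pi[a], pi[b], beta, eps)
        ll.append(np.log(p) if y[a, b] == 1 else np.log(1.0 - p))
    return np.exp(-np.mean(ll))
```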

83 For the stratified random node sampling, we set the number of non-link sets m = 10. [sent-319, score-0.257]

84 Figure 2 shows the time to convergence for four networks of varying types, node counts N, and sparsity d. [sent-326, score-0.26]

85 Figure 3 shows that stratified random node sampling converges an order of magnitude faster than random node sampling. [sent-333, score-0.512]

86 It is statistically more efficient because the observations in each iteration include all the links of a node and a random sample of its non-links. [sent-334, score-0.447]

87 AM can recover communities with equal or better accuracy than the best scalable algorithms: the Poisson model (PM) [3], Clique percolation (CP) [7] and Link clustering (LC) [8]. [sent-356, score-0.472]

88 We measure the ability of algorithms to recover overlapping communities in synthetic networks generated by the benchmark tool [28]. [sent-357, score-0.556]

89 Our synthetic networks reflect real-world networks by modeling noisy links and by varying community densities from sparse to dense. [sent-358, score-0.679]

90 We evaluate using normalized mutual information (NMI) between discovered communities and the ground truth communities [28]. [sent-359, score-0.726]
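The paper evaluates with the overlapping-communities NMI variant of [28]; as a simpler stand-in, here is a sketch of standard NMI computed on hardened (most likely) community assignments.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def nmi_hard(pi, true_labels):
    """NMI between each node's most likely community and ground-truth labels.
    Note: this ignores overlap; [28] defines the overlapping-NMI actually used."""
    pred = np.argmax(pi, axis=1)
    return normalized_mutual_info_score(true_labels, pred)
```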

91 AM outperforms PM, LC, and CP on noisy networks and networks with sparse communities, and it matches the best performance in the noiseless case and the dense case. [sent-365, score-0.265]

92 In addition to the co-authorship network in Figure 1(a), we analyzed the “cond-mat” collaboration network [26] with the number of communities set to 300. [sent-376, score-0.618]

93 In the supplement, we visualized the top authors in the network by a measure of their participation in different communities (bridgeness [17]). [sent-378, score-0.532]

94–95 We generated 280 networks for combinations of these parameters: #nodes ∈ {400}; #communities ∈ {5, 10}; #nodes with at least 3 overlapping communities ∈ {100}; community sizes ∈ {equal, unequal} (when unequal, the community sizes lie in the range [N/(2K), 2N/K]); average node degree ∈ {0.1·N/K, …, 0.4·N/K}; maximum node degree = 2 × average node degree; % of a node's links that are noisy ∈ {0, …}.

96 Efficient and principled method for detecting communities in networks. [sent-410, score-0.363]

97 Finding community structure in networks using the eigenvectors of matrices. [sent-484, score-0.35]

98 Fuzzy communities and the concept of bridgeness in complex networks. [sent-487, score-0.436]

99 Modeling social networks with node attributes using the multiplicative attribute graph model. [sent-511, score-0.362]

100 Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities. [sent-553, score-0.388]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('communities', 0.363), ('mmsb', 0.34), ('za', 0.269), ('community', 0.254), ('node', 0.232), ('yab', 0.202), ('pm', 0.197), ('links', 0.189), ('eq', 0.166), ('strati', 0.141), ('memberships', 0.141), ('cp', 0.138), ('lc', 0.131), ('hep', 0.129), ('subsampling', 0.126), ('nodes', 0.121), ('link', 0.119), ('network', 0.114), ('collab', 0.11), ('elbo', 0.11), ('stratified', 0.11), ('variational', 0.101), ('overlapping', 0.097), ('networks', 0.096), ('ph', 0.09), ('perplexity', 0.09), ('blockmodel', 0.084), ('stochastic', 0.084), ('pair', 0.079), ('ammsb', 0.073), ('bridgeness', 0.073), ('stoch', 0.073), ('percolation', 0.06), ('tc', 0.056), ('subsample', 0.056), ('astro', 0.055), ('participation', 0.055), ('pairs', 0.055), ('validation', 0.054), ('nmi', 0.053), ('clique', 0.051), ('posterior', 0.051), ('inference', 0.05), ('global', 0.05), ('scalable', 0.049), ('sampling', 0.048), ('batch', 0.046), ('subsamples', 0.045), ('newman', 0.045), ('noisy', 0.044), ('log', 0.044), ('ascent', 0.042), ('summations', 0.042), ('physical', 0.042), ('gradients', 0.039), ('blei', 0.037), ('detection', 0.037), ('assortative', 0.037), ('assortativity', 0.037), ('netscience', 0.037), ('santo', 0.037), ('tams', 0.037), ('lets', 0.035), ('poisson', 0.034), ('gradient', 0.034), ('social', 0.034), ('membership', 0.033), ('strata', 0.032), ('exploratory', 0.032), ('unbiased', 0.032), ('hours', 0.032), ('interaction', 0.03), ('strengths', 0.03), ('local', 0.03), ('nds', 0.029), ('dense', 0.029), ('reweighting', 0.028), ('enron', 0.028), ('sampled', 0.028), ('convergence', 0.028), ('held', 0.027), ('andrea', 0.027), ('kleinberg', 0.027), ('collaboration', 0.027), ('undirected', 0.026), ('iteration', 0.026), ('unequal', 0.026), ('figures', 0.025), ('member', 0.025), ('sets', 0.025), ('beta', 0.024), ('population', 0.024), ('draw', 0.024), ('likelihood', 0.024), ('linked', 0.024), ('researchers', 0.024), ('leskovec', 0.024), ('dirichlet', 0.024), ('academy', 0.023), ('fienberg', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999917 298 nips-2012-Scalable Inference of Overlapping Communities

Author: Prem Gopalan, Sean Gerrish, Michael Freedman, David M. Blei, David M. Mimno

Abstract: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms. 1

2 0.29723245 253 nips-2012-On Triangular versus Edge Representations --- Towards Scalable Modeling of Networks

Author: Qirong Ho, Junming Yin, Eric P. Xing

Abstract: In this paper, we argue for representing networks as a bag of triangular motifs, particularly for important network problems that current model-based approaches handle poorly due to computational bottlenecks incurred by using edge representations. Such approaches require both 1-edges and 0-edges (missing edges) to be provided as input, and as a consequence, approximate inference algorithms for these models usually require Ω(N 2 ) time per iteration, precluding their application to larger real-world networks. In contrast, triangular modeling requires less computation, while providing equivalent or better inference quality. A triangular motif is a vertex triple containing 2 or 3 edges, and the number of such motifs is 2 Θ( i Di ) (where Di is the degree of vertex i), which is much smaller than N 2 for low-maximum-degree networks. Using this representation, we develop a novel mixed-membership network model and approximate inference algorithm suitable for large networks with low max-degree. For networks with high maximum degree, the triangular motifs can be naturally subsampled in a node-centric fashion, allowing for much faster inference at a small cost in accuracy. Empirically, we demonstrate that our approach, when compared to that of an edge-based model, has faster runtime and improved accuracy for mixed-membership community detection. We conclude with a large-scale demonstration on an N ≈ 280, 000-node network, which is infeasible for network models with Ω(N 2 ) inference cost. 1

3 0.11318778 354 nips-2012-Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes

Author: Michael Bryant, Erik B. Sudderth

Abstract: Variational methods provide a computationally scalable alternative to Monte Carlo methods for large-scale, Bayesian nonparametric learning. In practice, however, conventional batch and online variational methods quickly become trapped in local optima. In this paper, we consider a nonparametric topic model based on the hierarchical Dirichlet process (HDP), and develop a novel online variational inference algorithm based on split-merge topic updates. We derive a simpler and faster variational approximation of the HDP, and show that by intelligently splitting and merging components of the variational posterior, we can achieve substantially better predictions of test data than conventional online and batch variational algorithms. For streaming analysis of large datasets where batch analysis is infeasible, we show that our split-merge updates better capture the nonparametric properties of the underlying model, allowing continual learning of new topics.

4 0.1129322 194 nips-2012-Learning to Discover Social Circles in Ego Networks

Author: Jure Leskovec, Julian J. Mcauley

Abstract: Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. ‘circles’ on Google+, and ‘lists’ on Facebook and Twitter), however they are laborious to construct and must be updated whenever a user’s network grows. We define a novel machine learning task of identifying users’ social circles. We pose the problem as a node clustering problem on a user’s ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership to multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter for all of which we obtain hand-labeled ground-truth. 1

5 0.11043096 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

Author: Chong Wang, David M. Blei

Abstract: We present a truncation-free stochastic variational inference algorithm for Bayesian nonparametric models. While traditional variational inference algorithms require truncations for the model or the variational distribution, our method adapts model complexity on the fly. We studied our method with Dirichlet process mixture models and hierarchical Dirichlet process topic models on two large data sets. Our method performs better than previous stochastic variational inference algorithms. 1

6 0.11035477 182 nips-2012-Learning Networks of Heterogeneous Influence

7 0.10703409 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

8 0.092852205 36 nips-2012-Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions

9 0.088645861 123 nips-2012-Exponential Concentration for Mutual Information Estimation with Application to Forests

10 0.082496434 215 nips-2012-Minimizing Uncertainty in Pipelines

11 0.08139877 58 nips-2012-Bayesian models for Large-scale Hierarchical Classification

12 0.07429564 100 nips-2012-Discriminative Learning of Sum-Product Networks

13 0.073921181 191 nips-2012-Learning the Architecture of Sum-Product Networks Using Clustering on Variables

14 0.073826484 129 nips-2012-Fast Variational Inference in the Conjugate Exponential Family

15 0.069193698 180 nips-2012-Learning Mixtures of Tree Graphical Models

16 0.06777446 199 nips-2012-Link Prediction in Graphs with Autoregressive Features

17 0.065209016 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

18 0.061935458 346 nips-2012-Topology Constraints in Graphical Models

19 0.061788067 77 nips-2012-Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models

20 0.061088737 74 nips-2012-Collaborative Gaussian Processes for Preference Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.166), (1, 0.047), (2, -0.018), (3, -0.027), (4, -0.143), (5, -0.022), (6, -0.022), (7, -0.081), (8, -0.103), (9, 0.052), (10, 0.03), (11, 0.068), (12, 0.015), (13, -0.006), (14, -0.035), (15, 0.039), (16, -0.039), (17, -0.025), (18, 0.004), (19, 0.04), (20, -0.071), (21, 0.034), (22, 0.152), (23, -0.028), (24, -0.022), (25, -0.158), (26, 0.026), (27, 0.14), (28, -0.101), (29, -0.075), (30, -0.053), (31, -0.009), (32, -0.171), (33, -0.024), (34, 0.016), (35, -0.073), (36, 0.139), (37, -0.069), (38, -0.03), (39, -0.003), (40, 0.026), (41, -0.008), (42, 0.015), (43, -0.129), (44, 0.048), (45, 0.037), (46, 0.261), (47, -0.103), (48, -0.042), (49, -0.053)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94541323 298 nips-2012-Scalable Inference of Overlapping Communities

Author: Prem Gopalan, Sean Gerrish, Michael Freedman, David M. Blei, David M. Mimno

Abstract: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms. 1

2 0.85262525 253 nips-2012-On Triangular versus Edge Representations --- Towards Scalable Modeling of Networks

Author: Qirong Ho, Junming Yin, Eric P. Xing

Abstract: In this paper, we argue for representing networks as a bag of triangular motifs, particularly for important network problems that current model-based approaches handle poorly due to computational bottlenecks incurred by using edge representations. Such approaches require both 1-edges and 0-edges (missing edges) to be provided as input, and as a consequence, approximate inference algorithms for these models usually require Ω(N 2 ) time per iteration, precluding their application to larger real-world networks. In contrast, triangular modeling requires less computation, while providing equivalent or better inference quality. A triangular motif is a vertex triple containing 2 or 3 edges, and the number of such motifs is 2 Θ( i Di ) (where Di is the degree of vertex i), which is much smaller than N 2 for low-maximum-degree networks. Using this representation, we develop a novel mixed-membership network model and approximate inference algorithm suitable for large networks with low max-degree. For networks with high maximum degree, the triangular motifs can be naturally subsampled in a node-centric fashion, allowing for much faster inference at a small cost in accuracy. Empirically, we demonstrate that our approach, when compared to that of an edge-based model, has faster runtime and improved accuracy for mixed-membership community detection. We conclude with a large-scale demonstration on an N ≈ 280, 000-node network, which is infeasible for network models with Ω(N 2 ) inference cost. 1

3 0.66804367 194 nips-2012-Learning to Discover Social Circles in Ego Networks

Author: Jure Leskovec, Julian J. Mcauley

Abstract: Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. ‘circles’ on Google+, and ‘lists’ on Facebook and Twitter), however they are laborious to construct and must be updated whenever a user’s network grows. We define a novel machine learning task of identifying users’ social circles. We pose the problem as a node clustering problem on a user’s ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership to multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter for all of which we obtain hand-labeled ground-truth. 1

4 0.60154152 346 nips-2012-Topology Constraints in Graphical Models

Author: Marcelo Fiori, Pablo Musé, Guillermo Sapiro

Abstract: Graphical models are a very useful tool to describe and understand natural phenomena, from gene expression to climate change and social interactions. The topological structure of these graphs/networks is a fundamental part of the analysis, and in many cases the main goal of the study. However, little work has been done on incorporating prior topological knowledge onto the estimation of the underlying graphical models from sample data. In this work we propose extensions to the basic joint regression model for network estimation, which explicitly incorporate graph-topological constraints into the corresponding optimization approach. The first proposed extension includes an eigenvector centrality constraint, thereby promoting this important prior topological property. The second developed extension promotes the formation of certain motifs, triangle-shaped ones in particular, which are known to exist for example in genetic regulatory networks. The presentation of the underlying formulations, which serve as examples of the introduction of topological constraints in network estimation, is complemented with examples in diverse datasets demonstrating the importance of incorporating such critical prior knowledge. 1

5 0.57399333 345 nips-2012-Topic-Partitioned Multinetwork Embeddings

Author: Peter Krafft, Juston Moore, Bruce Desmarais, Hanna M. Wallach

Abstract: We introduce a new Bayesian admixture model intended for exploratory analysis of communication networks—specifically, the discovery and visualization of topic-specific subnetworks in email data sets. Our model produces principled visualizations of email networks, i.e., visualizations that have precise mathematical interpretations in terms of our model and its relationship to the observed data. We validate our modeling assumptions by demonstrating that our model achieves better link prediction performance than three state-of-the-art network models and exhibits topic coherence comparable to that of latent Dirichlet allocation. We showcase our model’s ability to discover and visualize topic-specific communication patterns using a new email data set: the New Hanover County email network. We provide an extensive analysis of these communication patterns, leading us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. Finally, we advocate for principled visualization as a primary objective in the development of new network models. 1

6 0.56512833 182 nips-2012-Learning Networks of Heterogeneous Influence

7 0.5538317 132 nips-2012-Fiedler Random Fields: A Large-Scale Spectral Approach to Statistical Network Modeling

8 0.52402824 129 nips-2012-Fast Variational Inference in the Conjugate Exponential Family

9 0.47594398 215 nips-2012-Minimizing Uncertainty in Pipelines

10 0.46162698 354 nips-2012-Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes

11 0.4559586 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

12 0.44493127 58 nips-2012-Bayesian models for Large-scale Hierarchical Classification

13 0.44410273 327 nips-2012-Structured Learning of Gaussian Graphical Models

14 0.43095371 10 nips-2012-A Linear Time Active Learning Algorithm for Link Classification

15 0.41572183 154 nips-2012-How They Vote: Issue-Adjusted Models of Legislative Behavior

16 0.40605962 152 nips-2012-Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints

17 0.40489447 77 nips-2012-Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models

18 0.4040527 302 nips-2012-Scaling MPE Inference for Constrained Continuous Markov Random Fields with Consensus Optimization

19 0.39085653 137 nips-2012-From Deformations to Parts: Motion-based Segmentation of 3D Objects

20 0.38614771 103 nips-2012-Distributed Probabilistic Learning for Camera Networks with Missing Data


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.05), (21, 0.032), (38, 0.091), (42, 0.035), (50, 0.194), (51, 0.015), (53, 0.022), (54, 0.015), (55, 0.072), (59, 0.062), (74, 0.064), (76, 0.118), (80, 0.088), (92, 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.79730606 298 nips-2012-Scalable Inference of Overlapping Communities

Author: Prem Gopalan, Sean Gerrish, Michael Freedman, David M. Blei, David M. Mimno

Abstract: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms. 1

2 0.71022344 253 nips-2012-On Triangular versus Edge Representations --- Towards Scalable Modeling of Networks

Author: Qirong Ho, Junming Yin, Eric P. Xing

Abstract: In this paper, we argue for representing networks as a bag of triangular motifs, particularly for important network problems that current model-based approaches handle poorly due to computational bottlenecks incurred by using edge representations. Such approaches require both 1-edges and 0-edges (missing edges) to be provided as input, and as a consequence, approximate inference algorithms for these models usually require Ω(N 2 ) time per iteration, precluding their application to larger real-world networks. In contrast, triangular modeling requires less computation, while providing equivalent or better inference quality. A triangular motif is a vertex triple containing 2 or 3 edges, and the number of such motifs is 2 Θ( i Di ) (where Di is the degree of vertex i), which is much smaller than N 2 for low-maximum-degree networks. Using this representation, we develop a novel mixed-membership network model and approximate inference algorithm suitable for large networks with low max-degree. For networks with high maximum degree, the triangular motifs can be naturally subsampled in a node-centric fashion, allowing for much faster inference at a small cost in accuracy. Empirically, we demonstrate that our approach, when compared to that of an edge-based model, has faster runtime and improved accuracy for mixed-membership community detection. We conclude with a large-scale demonstration on an N ≈ 280, 000-node network, which is infeasible for network models with Ω(N 2 ) inference cost. 1

3 0.6855973 215 nips-2012-Minimizing Uncertainty in Pipelines

Author: Nilesh Dalvi, Aditya Parameswaran, Vibhor Rastogi

Abstract: In this paper, we consider the problem of debugging large pipelines by human labeling. We represent the execution of a pipeline using a directed acyclic graph of AND and OR nodes, where each node represents a data item produced by some operator in the pipeline. We assume that each operator assigns a confidence to each of its output data. We want to reduce the uncertainty in the output by issuing queries to a human, where a query consists of checking if a given data item is correct. In this paper, we consider the problem of asking the optimal set of queries to minimize the resulting output uncertainty. We perform a detailed evaluation of the complexity of the problem for various classes of graphs. We give efficient algorithms for the problem for trees, and show that, for a general dag, the problem is intractable. 1

4 0.68420416 235 nips-2012-Natural Images, Gaussian Mixtures and Dead Leaves

Author: Daniel Zoran, Yair Weiss

Abstract: Simple Gaussian Mixture Models (GMMs) learned from pixels of natural image patches have been recently shown to be surprisingly strong performers in modeling the statistics of natural images. Here we provide an in depth analysis of this simple yet rich model. We show that such a GMM model is able to compete with even the most successful models of natural images in log likelihood scores, denoising performance and sample quality. We provide an analysis of what such a model learns from natural images as a function of number of mixture components including covariance structure, contrast variation and intricate structures such as textures, boundaries and more. Finally, we show that the salient properties of the GMM learned from natural images can be derived from a simplified Dead Leaves model which explicitly models occlusion, explaining its surprising success relative to other models. (§1, GMMs and natural image statistics models) Many models for the statistics of natural image patches have been suggested in recent years. Finding good models for natural images is important to many different research areas: computer vision, biological vision and neuroscience among others. Recently, there has been a growing interest in comparing different aspects of models for natural images such as log-likelihood and multi-information reduction performance, and much progress has been achieved [1, 2, 3, 4, 5, 6]. Out of these results there is one which is particularly interesting: simple, unconstrained Gaussian Mixture Models (GMMs) with a relatively small number of mixture components learned from image patches are extraordinarily good in modeling image statistics [6, 4]. This is a surprising result due to the simplicity of GMMs and their ubiquity. Another surprising aspect of this result is that many of the current models may be thought of as GMMs with an exponential or infinite number of components, having different constraints on the covariance structure of the mixture components.

5 0.67981416 265 nips-2012-Parametric Local Metric Learning for Nearest Neighbor Classification

Author: Jun Wang, Alexandros Kalousis, Adam Woznica

Abstract: We study the problem of learning local metrics for nearest neighbor classification. Most previous works on local metric learning learn a number of local unrelated metrics. While this ”independence” approach delivers an increased flexibility its downside is the considerable risk of overfitting. We present a new parametric local metric learning method in which we learn a smooth metric matrix function over the data manifold. Using an approximation error bound of the metric matrix function we learn local metrics as linear combinations of basis metrics defined on anchor points over different regions of the instance space. We constrain the metric matrix function by imposing on the linear combinations manifold regularization which makes the learned metric matrix function vary smoothly along the geodesics of the data manifold. Our metric learning method has excellent performance both in terms of predictive power and scalability. We experimented with several largescale classification problems, tens of thousands of instances, and compared it with several state of the art metric learning methods, both global and local, as well as to SVM with automatic kernel selection, all of which it outperforms in a significant manner. 1

6 0.67829925 272 nips-2012-Practical Bayesian Optimization of Machine Learning Algorithms

7 0.67734915 294 nips-2012-Repulsive Mixtures

8 0.6736514 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

9 0.67356825 52 nips-2012-Bayesian Nonparametric Modeling of Suicide Attempts

10 0.67317301 188 nips-2012-Learning from Distributions via Support Measure Machines

11 0.67128402 210 nips-2012-Memorability of Image Regions

12 0.67108405 158 nips-2012-ImageNet Classification with Deep Convolutional Neural Networks

13 0.6704464 211 nips-2012-Meta-Gaussian Information Bottleneck

14 0.66930485 168 nips-2012-Kernel Latent SVM for Visual Recognition

15 0.66892153 193 nips-2012-Learning to Align from Scratch

16 0.66828614 306 nips-2012-Semantic Kernel Forests from Multiple Taxonomies

17 0.66820455 104 nips-2012-Dual-Space Analysis of the Sparse Linear Model

18 0.66727698 197 nips-2012-Learning with Recursive Perceptual Representations

19 0.66701657 274 nips-2012-Priors for Diversity in Generative Latent Variable Models

20 0.66489208 229 nips-2012-Multimodal Learning with Deep Boltzmann Machines