nips nips2009 nips2009-174 knowledge-graph by maker-knowledge-mining

174 nips-2009-Nonparametric Latent Feature Models for Link Prediction


Source: pdf

Author: Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

Abstract: As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. [sent-9, score-0.405]

2 The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. [sent-10, score-0.279]

3 In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. [sent-11, score-0.829]

4 We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. [sent-12, score-0.714]

5 Our model combines these inferred features with known covariates in order to perform link prediction. [sent-13, score-0.436]

6 1 Introduction: Statistical analysis of social networks and other relational data has been an active area of research for over seventy years and is becoming an increasingly important problem as the scope and availability of social network datasets increase [1]. [sent-15, score-0.606]

7 In these problems, we observe the interactions between a set of entities and we wish to extract informative representations that are useful for making predictions about the entities and their relationships. [sent-16, score-0.397]

8 One basic challenge is link prediction, where we observe the relationships (or “links”) between some pairs of entities in a network (or “graph”) and we try to predict unobserved links. [sent-17, score-0.414]

9 For example, in a social network, we might only know some subset of people are friends and some are not, and seek to predict which other people are likely to get along. [sent-18, score-0.275]

10 Our goal is to improve the expressiveness and performance of generative models based on extracting latent structure representing the properties of individual entities from the observed data, so we will focus on these kinds of models. [sent-19, score-0.451]

11 In the most basic form of the model, we assume there are a finite number of classes that entities can belong to and that these classes entirely determine the structure of the graph, with the probability of a link existing between two entities depending only on the classes of those entities. [sent-24, score-0.643]

12 In general, these classes are unobserved, and inference reduces to assigning entities to classes and inferring the class interactions. [sent-25, score-0.305]

13 One of the important issues that arise in working with this model is determining how many latent classes there are for a given problem. [sent-26, score-0.313]

14 The Infinite Hidden Relational Model [7] further elaborated on this model and the Mixed Membership Stochastic Blockmodel (MMSB) [8] extended it to allow entities to have mixed memberships. [sent-28, score-0.245]

15 All these class-based models share a basic limitation in the kinds of relational structure they naturally capture. [sent-29, score-0.334]

16 In addition, if someone is both an athlete and a musician, we would either have to add another class for that or use a mixed membership model, which would say that the more a student is an athlete, the less he is a musician. [sent-34, score-0.267]

17 There could be a separate feature for “high school student,” “male,” “athlete,” and “musician” and the presence or absence of each of these features is what defines each person and determines their relationships. [sent-36, score-0.313]

18 However, extending our high school student example, we might hope that instead of having arbitrary real-valued features (which are still useful for visualization), we would infer binary features where each feature could correspond to an attribute like “male” or “athlete. [sent-38, score-0.671]

19 In this paper, we present the nonparametric latent feature relational model, a Bayesian nonparametric model in which each entity has binary-valued latent features that influence its relations. [sent-43, score-1.485]

20 This model allows us to simultaneously infer how many latent features there are while at the same time inferring what features each entity has and how those features influence the observations. [sent-45, score-1.087]

21 In Section 4, we illustrate the properties of our model using synthetic data and then show that the greater expressiveness of the latent feature representation results in improved link prediction on three real datasets. [sent-49, score-0.595]

22 2 The nonparametric latent feature relational model: Assume we observe the directed relational links between a set of N entities. [sent-51, score-1.108]

23 That is, let yij ≡ Y (i, j) = 1 if we observe a link from entity i to entity j in that relation and yij = 0 if we observe that there is not a link. [sent-53, score-0.89]

24 1 Basic model: In our basic model, each entity is described by a set of binary features. [sent-57, score-0.371]

25 We are not given these features a priori and will attempt to infer them. [sent-58, score-0.226]

26 We assume that the probability of having a link from one entity to another is entirely determined by the combined effect of all pairwise feature interactions. [sent-59, score-0.47]

27 If there are K features, then let Z be the N × K binary matrix where each row corresponds to an entity and each column corresponds to a feature such that zik ≡ Z(i, k) = 1 if the ith entity has feature k and zik = 0 otherwise. [sent-60, score-1.067]

28 Let Zi denote the feature vector corresponding to entity i. [sent-61, score-0.355]

29 Let W be a K × K real-valued weight matrix where wkk′ ≡ W(k, k′) is the weight that affects the probability of there being a link from entity i to entity j when entity i has feature k and entity j has feature k′. [sent-62, score-1.5]

30 We assume that links are independent conditioned on Z and W , and that only the features of entities i and j influence the probability of a link between those entities. [sent-63, score-0.496]
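
A minimal sketch of this basic likelihood in Python/NumPy (function and variable names here are illustrative, not from the paper): the probability of each link is the sigmoid of the pairwise feature interactions, and observed links are conditionally independent given Z and W.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def link_probabilities(Z, W):
        # Z: N x K binary feature matrix, W: K x K real-valued weight matrix.
        # Entry (i, j) is P(y_ij = 1 | Z, W) = sigmoid(Z_i W Z_j^T).
        return sigmoid(Z @ W @ Z.T)

    def log_likelihood(Y, Z, W, observed=None):
        # Bernoulli log likelihood of the link matrix Y, treating links as
        # independent given Z and W; `observed` masks out held-out entries.
        P = np.clip(link_probabilities(Z, W), 1e-12, 1 - 1e-12)
        ll = Y * np.log(P) + (1 - Y) * np.log(1 - P)
        if observed is not None:
            ll = ll * observed
        return ll.sum()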

31 However, since entities can have more than a single feature, the model is more expressive. [sent-71, score-0.206]

32 In the high school student example, each feature can correspond to an attribute like “male,” “musician,” and “athlete. [sent-72, score-0.24]

33 Then the weight at the (athlete, musician) entry of W would determine how likely an athlete is to be a friend of a musician. [sent-74, score-0.314]

34 A positive weight would correspond to an increased probability, a negative weight a decreased probability, and a zero weight would indicate that there is no correlation between those two features and the observed relation. [sent-75, score-0.291]

35 The more positively correlated features people have, the more likely they are to be friends. [sent-76, score-0.239]

36 While other features such as “athlete” or “musician” might indicate that one person could be a friend of another, the geographic features could have extremely negative weights so that people who live far from each other are less likely to be friends. [sent-78, score-0.516]

37 Given the full set of observations Y , we wish to infer the posterior distribution of the feature matrix Z and the weights W . [sent-81, score-0.256]

38 Without any prior knowledge about the features or their weights, a natural prior for W involves placing an independent N(0, σw²) prior on each entry of W. [sent-83, score-0.255]

39 However, we wish to have a flexible prior that allows us to simultaneously infer the number of features at the same time we infer all the entries in Z. [sent-86, score-0.377]

40 2 The Indian Buffet Process and the basic generative model: As mentioned in the previous section, any features which are all-zero do not affect the likelihood. [sent-89, score-0.242]

41 The Indian Buffet Process (IBP) [12] is a prior on infinite binary matrices such that with probability one, a feature matrix drawn from it for a finite number of entities will only have a finite number of non-zero features. [sent-91, score-0.401]

42 Moreover, any feature matrix, no matter how many non-zero features it contains, has positive probability under the IBP prior. [sent-92, score-0.272]

43 It is therefore a useful nonparametric prior to place on our latent feature matrix Z. [sent-93, score-0.509]

44 The ith customer tries each previously sampled dish with probability proportional to the number of people that have already tried the dish and then samples a Poisson(α/i) number of new dishes. [sent-99, score-0.284]

45 Using an IBP prior on Z, our basic generative latent feature relational model is: Z ∼ IBP(α), wkk′ ∼ N(0, σw²), yij ∼ σ(Zi W Zj⊤). [sent-102, score-0.914]
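
The generative process above, together with the culinary metaphor of the previous sentence, can be sketched as follows; this is an illustrative NumPy implementation rather than the authors' code.

    import numpy as np

    def sample_ibp(N, alpha, rng):
        # Indian Buffet Process: customer i takes an existing dish k with
        # probability m_k / i (m_k = how many earlier customers took it),
        # then tries Poisson(alpha / i) new dishes.
        Z = np.zeros((N, 0), dtype=int)
        for i in range(N):
            m = Z[:i].sum(axis=0)
            Z[i, :] = (rng.random(Z.shape[1]) < m / (i + 1)).astype(int)
            k_new = rng.poisson(alpha / (i + 1))
            if k_new > 0:
                new_cols = np.zeros((N, k_new), dtype=int)
                new_cols[i, :] = 1
                Z = np.hstack([Z, new_cols])
        return Z

    def sample_basic_lfrm(N, alpha, sigma_w, rng):
        # Z ~ IBP(alpha); w_kk' ~ N(0, sigma_w^2); y_ij ~ Bernoulli(sigmoid(Z_i W Z_j^T)).
        Z = sample_ibp(N, alpha, rng)
        K = Z.shape[1]
        W = rng.normal(0.0, sigma_w, size=(K, K))
        P = 1.0 / (1.0 + np.exp(-(Z @ W @ Z.T)))
        Y = (rng.random((N, N)) < P).astype(int)
        return Z, W, Y

    Z, W, Y = sample_basic_lfrm(N=20, alpha=2.0, sigma_w=1.0, rng=np.random.default_rng(0))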

46 Full nonparametric latent feature relational model: We have described the basic nonparametric latent feature relational model. [sent-104, score-1.531]

47 First, we note that there are many instances of logit models used in statistical network analysis that make use of covariates in link prediction [2]. [sent-106, score-0.27]

48 Let Xij be a vector that influences the relation yij , let Xp,i be a vector of known attributes of entity i when it is the parent of a link, and let Xc,i be a vector of known attributes of entity i when it is a child of a link. [sent-108, score-0.749]

49 In this model, c is a global offset that affects the default likelihood of a relation and ai and bj are entity and role specific offsets. [sent-113, score-0.346]
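
The extracted sentences describe the ingredients of the full model (pair covariates Xij, parent and child covariates, offsets ai and bj, and a global offset c) but not the exact formula. The sketch below shows one plausible way to combine them additively inside the sigmoid; all names are hypothetical and the paper's precise parameterization may differ.

    import numpy as np

    def full_model_probs(Z, W, X_pair, X_p, X_c, beta_pair, beta_p, beta_c, a, b, c):
        # One plausible reading of the full model:
        #   P(y_ij = 1) = sigmoid( Z_i W Z_j^T + beta_pair . X_ij
        #                          + beta_p . X_p,i + beta_c . X_c,j + a_i + b_j + c )
        # Shapes: Z (N,K), W (K,K), X_pair (N,N,Dp), X_p (N,D1), X_c (N,D2),
        #         a and b length-N offset vectors, c a scalar.
        eta = Z @ W @ Z.T
        eta = eta + X_pair @ beta_pair            # pair-specific covariates
        eta = eta + (X_p @ beta_p)[:, None]       # attributes of i as the parent of the link
        eta = eta + (X_c @ beta_c)[None, :]       # attributes of j as the child of the link
        eta = eta + a[:, None] + b[None, :] + c   # entity/role offsets and global offset
        return 1.0 / (1.0 + np.exp(-eta))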

50 We still believe that each entity has a single set of features that determines all its relations, but these features will not affect each relation in the same way. [sent-117, score-0.627]

51 We will use the same features for each relation, but we will use an independent weight matrix W i for each relation Y i . [sent-122, score-0.313]

52 Pr(Y^1, . . . , Y^m | Z, {W^i, X^i, β^i, a^i, b^i, c^i} for i = 1, . . . , m) = ∏_{i=1}^m Pr(Y^i | Z, W^i, X^i, β^i, a^i, b^i, c^i). [sent-128, score-0.212]

53 4 Variations of the nonparametric latent feature relational model: The model that we have defined is for directed graphs in which the matrix Y^i is not assumed to be symmetric. [sent-130, score-0.845]

54 5 Related nonparametric latent feature models: There are two models related to our nonparametric latent feature relational model that both use the IBP as a prior on binary latent feature matrices. [sent-135, score-1.612]

55 If Y is an N × N matrix where we assume the rows and columns have the same features (i. [sent-139, score-0.239]

56 This model does not allow for arbitrary feature interactions nor does it allow for negative feature correlations. [sent-145, score-0.295]

57 3 Inference: Exact inference in our nonparametric latent feature relational model is intractable [12]. [sent-146, score-0.763]

58 Therefore, when resampling zik for non-zero columns k, if mk is the number of non-zero entries in column k excluding row i, then Pr(zik = 1|Z−ik , W, Y ) ∝ mk Pr(Y |zik = 1, Z−ik , W ). [sent-154, score-0.223]

59 We must also sample zik for each of the infinitely many all-zero columns to add features to the representation. [sent-155, score-0.33]

60 Here, we use the fact that in the IBP, the prior distribution on the number of new features for the last customer is Poisson(α/N ). [sent-156, score-0.261]

61 We approximate this by computing the unnormalized probability of adding 0, 1, . . . , kmax new features for some maximum number of new features kmax and sampling the number of new features from this normalized distribution. [sent-161, score-0.477]
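
Sentences 58-61 describe the Gibbs update for a row of Z: existing columns are resampled using the IBP conditional prior times the likelihood, and the number of new columns is sampled from a truncated Poisson(α/N) posterior. The sketch below is illustrative only; in particular, weights for candidate new features are simply drawn from their prior here, which is a simplification of whatever the paper actually does.

    import numpy as np
    from scipy.stats import poisson

    def resample_entity_features(i, Z, W, Y, alpha, sigma_w, log_lik, rng, k_max=4):
        # log_lik(Z, W) must return log Pr(Y | Z, W), e.g. the earlier likelihood sketch.
        N, K = Z.shape
        # (1) Existing columns: Pr(z_ik = 1 | ...) proportional to m_k * Pr(Y | z_ik = 1, ...),
        #     with m_k the count in column k excluding row i (and N - m_k for z_ik = 0).
        for k in range(K):
            m_k = Z[:, k].sum() - Z[i, k]
            if m_k == 0:
                continue  # columns used only by entity i are left to the new-feature / cleanup step
            Z[i, k] = 1
            log_odds = np.log(m_k) - np.log(N - m_k) + log_lik(Z, W)
            Z[i, k] = 0
            log_odds -= log_lik(Z, W)
            Z[i, k] = int(rng.random() < 1.0 / (1.0 + np.exp(-log_odds)))
        # (2) New columns: Poisson(alpha / N) prior on the number of new features,
        #     truncated at k_max and combined with the likelihood.
        log_post = np.zeros(k_max + 1)
        candidates = []
        for j in range(k_max + 1):
            Zj = np.hstack([Z, np.zeros((N, j), dtype=Z.dtype)])
            Zj[i, K:] = 1
            Wj = rng.normal(0.0, sigma_w, size=(K + j, K + j))
            Wj[:K, :K] = W                       # keep existing weights, draw new ones from the prior
            candidates.append((Zj, Wj))
            log_post[j] = poisson.logpmf(j, alpha / N) + log_lik(Zj, Wj)
        p = np.exp(log_post - log_post.max())
        j_star = rng.choice(k_max + 1, p=p / p.sum())
        return candidates[j_star]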

62 Given Z, resample W: We sequentially resample each of the weights in W that correspond to non-zero features and drop all weights that correspond to all-zero features. [sent-166, score-0.253]
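
The extracted sentence says each non-zero weight is resampled in turn but not how; with the sigmoid likelihood the conditional for each weight is non-conjugate, so one simple stand-in is a per-entry random-walk Metropolis step (a sketch, not necessarily the paper's update).

    import numpy as np

    def resample_weights(Z, W, Y, sigma_w, log_lik, rng, step=0.2):
        # Random-walk Metropolis update of each entry of W under its N(0, sigma_w^2) prior.
        K = W.shape[0]
        cur_ll = log_lik(Z, W)
        for k in range(K):
            for kp in range(K):
                old = W[k, kp]
                W[k, kp] = old + step * rng.normal()
                new_ll = log_lik(Z, W)
                log_prior_ratio = (old ** 2 - W[k, kp] ** 2) / (2.0 * sigma_w ** 2)
                if np.log(rng.random()) < new_ll - cur_ll + log_prior_ratio:
                    cur_ll = new_ll          # accept the move
                else:
                    W[k, kp] = old           # reject and restore
        return W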

63 In (a), we show features that could be explained by a latent-class model that then produces the observation matrix in (b). [sent-173, score-0.241]

64 In (c), we show the feature matrix of our other synthetic dataset along with the corresponding observations in (d). [sent-175, score-0.239]

65 One consequence of the basic model in Section 2.1 is that the stochastic blockmodel is a special case of our model in which each entity has only a single feature. [sent-187, score-0.408]

66 Stochastic blockmodels have been shown to perform well for statistical network analysis, so they seem like a reasonable way to initialize the feature matrix. [sent-188, score-0.221]
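
A small sketch of that initialization (names illustrative): class assignments from an IRM-style blockmodel become a one-hot feature matrix used to seed the sampler.

    import numpy as np

    def classes_to_features(class_assignments):
        # One binary feature per latent class: entity i gets its class feature set to 1.
        z = np.asarray(class_assignments)
        Z = np.zeros((z.size, z.max() + 1), dtype=int)
        Z[np.arange(z.size), z] = 1
        return Z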

67 These datasets were simple enough that the basic model could attain 100% accuracy on held-out data, but were different enough to address the qualitative characteristics of the latent features inferred. [sent-196, score-0.504]

68 In one dataset, the features were the class-based features seen in Figure 1(a) and in the other, we used the features in Figure 1(c). [sent-197, score-0.477]

69 With the very simple, class-based model, 50% of the sampled feature matrices were identical to the generating feature matrix with another 25% differing by a single bit. [sent-200, score-0.269]

70 Though this matrix is different from the true generating features, with the appropriate weight matrix it predicts just as well as the true feature matrix. [sent-204, score-0.243]

71 These tests show that while our latent feature approach is able to learn features that explain the data well, due to subtle interactions between sets of features and weights, the features themselves will not in general correspond to interpretable features. [sent-205, score-0.844]

72 However, we can expect the inferred features to do a good job explaining the data. [sent-206, score-0.207]

73 These include a dataset containing 54 relations of 14 countries (such as “exports to” and “protests”) along with 90 given features of the countries [19] and a dataset containing 26 kinship relationships of 104 people in the Alyawarra tribe in Central Australia [20]. [sent-210, score-0.89]

74 Our goal in applying the latent feature relational model to these datasets was to demonstrate the effectiveness of our algorithm when compared to two established class-based algorithms, the IRM and the MMSB, and to demonstrate the effectiveness of our full algorithm. [sent-212, score-0.737]

75 For the countries dataset, Xp = Xc was the set of known features of the countries and X was the country distance similarity matrix described in Section 2. [sent-214, score-0.68]

76 As mentioned in the synthetic data section, the inferred features do not necessarily have any interpretable meaning, so we restrict ourselves to a quantitative comparison. [sent-216, score-0.251]

77 We report results for inferring a global set of features for all relations as described in Section 2.3. [sent-218, score-0.28]

78 which we refer to as “global,” as well as results when a different set of features is independently learned for each relation and then the AUCs of all relations are averaged together, which we refer to as “single.” [sent-219, score-0.309]

79 In addition, we tried initializing our sampler for the latent feature relational model with either a random feature matrix (“LFRM rand”) or class-based features from the IRM (“LFRM w/ IRM”). [sent-220, score-1.058]

80 Table 1: AUC on the countries and kinship datasets. [sent-223, score-0.271]

81 However, the random initialization does poorly at inferring the global features due to the coupling of features and the weights for each of the relations. [sent-260, score-0.406]

82 To demonstrate that the covariates are helping, but that even without them, our model does well, we ran the global LFRM with class-based initialization without covariates on the countries dataset and the AUC dropped to 0. [sent-262, score-0.497]

83 On the countries data, the latent feature model inferred on average 5-7 features when seeded with the IRM and 8-9 with a random initialization. [sent-265, score-0.862]

84 On the kinship data, it inferred 9-11 features when seeded with the IRM and 13-19 when seeded randomly. [sent-266, score-0.379]

85 3 Predicting NIPS coauthorship: As our final example, highlighting the expressiveness of the latent feature relational model, we used the coauthorship data from the NIPS dataset compiled in [22]. [sent-271, score-1.004]

86 We took the 234 authors who had published with the most other people and looked at their coauthorship information. [sent-273, score-0.219]

87 We again learned models for the latent feature relational model, the IRM and the MMSB training on 80% of the data and using the remaining 20% as a test set. [sent-275, score-0.627]
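
A sketch of this evaluation protocol (illustrative names, using scikit-learn's AUC routine): hold out 20% of the entries of Y, fit on the rest, and score the predicted link probabilities for the held-out entries.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    N = 234                                    # e.g. the NIPS coauthorship matrix
    test_mask = rng.random((N, N)) < 0.2       # 20% held-out entries
    train_mask = ~test_mask

    def heldout_auc(Y, predicted_probs, test_mask):
        # predicted_probs would typically be the posterior mean of
        # sigmoid(Z_i W Z_j^T) averaged over MCMC samples.
        return roc_auc_score(Y[test_mask], predicted_probs[test_mask])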

88 For the latent feature model, since the coauthorship relationship is symmetric, we learned a full, symmetric weight matrix W as described in Section 2. [sent-276, score-0.563]
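
For this symmetric case, one simple way to keep W equal to its transpose during sampling is to update only the upper triangle and mirror it (a sketch):

    import numpy as np

    def symmetrize(W):
        # Reflect the upper triangle (including the diagonal) onto the lower triangle.
        return np.triu(W) + np.triu(W, k=1).T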

89 The latent feature relational model is the most expressive of the models and is able to much more faithfully reproduce the coauthorship network. [sent-286, score-0.852]

90 The latent feature relational model also quantitatively outperformed the IRM and MMSB. [sent-287, score-0.666]

91 We again ran our sampler for 1000 samples initializing with either a random feature matrix or a class-based feature matrix from the IRM and reported the AUC on the held-out data. [sent-288, score-0.389]

92 On average, the latent feature relational model inferred 20-22 features when initialized with the IRM and 38-44 features when initialized randomly. [sent-295, score-1.032]

93 5 Conclusion: We have introduced the nonparametric latent feature relational model, an expressive nonparametric model for inferring latent binary features in relational entities. [sent-296, score-1.664]

94 Existing class-based approaches infer latent structure that is a special case of what can be inferred by this model. [sent-298, score-0.339]

95 We showed empirically that the nonparametric latent feature model performs well at link prediction on several different datasets, including datasets that were originally used to argue for class-based approaches. [sent-300, score-0.626]

96 Learning systems of concepts with an infinite relational model. [sent-329, score-0.29]

97 Multiplicative latent factor models for description and prediction of social networks. [sent-358, score-0.339]

98 Infinite latent feature models and the Indian Buffet Process. [sent-362, score-0.337]

99 Latent features in similarity judgment: A nonparametric Bayesian approach. [sent-378, score-0.256]

100 A choice model with infinitely many latent features. [sent-393, score-0.263]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('irm', 0.435), ('relational', 0.29), ('entity', 0.242), ('latent', 0.224), ('countries', 0.219), ('ibp', 0.181), ('lfrm', 0.179), ('entities', 0.167), ('features', 0.159), ('mmsb', 0.159), ('coauthorship', 0.139), ('zik', 0.134), ('athlete', 0.119), ('social', 0.115), ('link', 0.115), ('feature', 0.113), ('yij', 0.112), ('musician', 0.099), ('nonparametric', 0.097), ('male', 0.09), ('alyawarra', 0.087), ('relations', 0.083), ('people', 0.08), ('blockmodel', 0.08), ('sampler', 0.077), ('covariates', 0.075), ('customer', 0.07), ('friend', 0.07), ('pr', 0.069), ('dish', 0.067), ('infer', 0.067), ('relation', 0.067), ('auc', 0.066), ('expressiveness', 0.06), ('blockmodels', 0.06), ('seeded', 0.06), ('wkk', 0.06), ('buffet', 0.057), ('links', 0.055), ('membership', 0.055), ('student', 0.054), ('kinship', 0.052), ('entries', 0.052), ('indian', 0.051), ('classes', 0.05), ('initialization', 0.05), ('inferred', 0.048), ('network', 0.048), ('geographic', 0.048), ('probit', 0.048), ('resample', 0.047), ('expressive', 0.047), ('stochastic', 0.047), ('binary', 0.046), ('rand', 0.045), ('synthetic', 0.044), ('weight', 0.044), ('basic', 0.044), ('attributes', 0.043), ('matrix', 0.043), ('metaphor', 0.042), ('dishes', 0.042), ('xij', 0.042), ('school', 0.041), ('country', 0.04), ('unobserved', 0.04), ('thomas', 0.04), ('athletes', 0.04), ('bmf', 0.04), ('enter', 0.04), ('nations', 0.04), ('philippa', 0.04), ('model', 0.039), ('zi', 0.039), ('dataset', 0.039), ('mixed', 0.039), ('datasets', 0.038), ('inferring', 0.038), ('entry', 0.037), ('columns', 0.037), ('ai', 0.037), ('association', 0.037), ('ik', 0.036), ('zj', 0.036), ('american', 0.036), ('ci', 0.035), ('bi', 0.034), ('interact', 0.034), ('predictions', 0.033), ('full', 0.033), ('prior', 0.032), ('attribute', 0.032), ('logit', 0.032), ('stanley', 0.032), ('culinary', 0.032), ('gibbs', 0.031), ('tom', 0.031), ('nite', 0.03), ('interactions', 0.03), ('grif', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999905 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

Author: Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

Abstract: As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets. 1

2 0.22811437 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

Author: Ilya Sutskever, Joshua B. Tenenbaum, Ruslan Salakhutdinov

Abstract: We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us “understand” a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data.

3 0.15945219 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

4 0.14719763 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. 1

5 0.1366073 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

Author: Francois Caron, Arnaud Doucet

Abstract: Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose here extensions of these models where the dependency between samples is given by a known decomposable graph. These models have appealing properties and can be easily learned using Monte Carlo techniques. 1

6 0.1332159 21 nips-2009-Abstraction and Relational learning

7 0.12964702 195 nips-2009-Probabilistic Relational PCA

8 0.11428154 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

9 0.11254437 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

10 0.11134294 89 nips-2009-FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs

11 0.10871107 114 nips-2009-Indian Buffet Processes with Power-law Behavior

12 0.10633473 57 nips-2009-Conditional Random Fields with High-Order Features for Sequence Labeling

13 0.074015751 171 nips-2009-Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution

14 0.065631777 3 nips-2009-AUC optimization and the two-sample problem

15 0.064018309 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

16 0.061575696 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes

17 0.060962573 129 nips-2009-Learning a Small Mixture of Trees

18 0.059628699 18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models

19 0.059325106 42 nips-2009-Bayesian Sparse Factor Models and DAGs Inference and Comparison

20 0.057838913 100 nips-2009-Gaussian process regression with Student-t likelihood


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.204), (1, -0.085), (2, -0.054), (3, -0.145), (4, 0.08), (5, -0.168), (6, 0.086), (7, -0.014), (8, -0.051), (9, 0.0), (10, 0.066), (11, -0.052), (12, -0.059), (13, -0.115), (14, -0.15), (15, 0.002), (16, 0.259), (17, 0.022), (18, 0.023), (19, 0.097), (20, -0.063), (21, 0.06), (22, 0.028), (23, 0.018), (24, 0.044), (25, 0.142), (26, -0.052), (27, -0.065), (28, 0.043), (29, 0.025), (30, 0.014), (31, -0.059), (32, 0.08), (33, -0.084), (34, 0.007), (35, -0.025), (36, -0.196), (37, 0.043), (38, 0.015), (39, -0.086), (40, -0.038), (41, 0.119), (42, 0.018), (43, -0.091), (44, -0.006), (45, 0.054), (46, -0.102), (47, -0.047), (48, -0.103), (49, -0.053)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93870139 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

Author: Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

Abstract: As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets. 1

2 0.7409817 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

Author: Ilya Sutskever, Joshua B. Tenenbaum, Ruslan Salakhutdinov

Abstract: We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us “understand” a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data.

3 0.72896504 195 nips-2009-Probabilistic Relational PCA

Author: Wu-jun Li, Dit-Yan Yeung, Zhihua Zhang

Abstract: One crucial assumption made by both principal component analysis (PCA) and probabilistic PCA (PPCA) is that the instances are independent and identically distributed (i.i.d.). However, this common i.i.d. assumption is unreasonable for relational data. In this paper, by explicitly modeling covariance between instances as derived from the relational information, we propose a novel probabilistic dimensionality reduction method, called probabilistic relational PCA (PRPCA), for relational data analysis. Although the i.i.d. assumption is no longer adopted in PRPCA, the learning algorithms for PRPCA can still be devised easily, like those for PPCA, which makes explicit use of the i.i.d. assumption. Experiments on real-world data sets show that PRPCA can effectively utilize the relational information to dramatically outperform PCA and achieve state-of-the-art performance.

4 0.69759923 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment.
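
The noisy-or mechanism mentioned here has a simple closed form: a lower-layer binary variable turns on with probability 1 - (1 - eps) * prod_k (1 - lam_k)^{u_k}, where u is the upper layer's binary vector, lam_k are per-parent activation weights, and eps is a leak term. The snippet below sketches only that single conditional with made-up names and values, not the paper's full infinite hierarchy.

```python
import numpy as np

def noisy_or_prob(u, lam, eps=0.01):
    """P(child = 1 | parents u) under a noisy-or link.

    u   : binary vector of upper-layer activations
    lam : per-parent activation probabilities in [0, 1]
    eps : leak probability when no parent is active
    """
    p_off = (1.0 - eps) * np.prod((1.0 - lam) ** u)
    return 1.0 - p_off

u = np.array([1, 0, 1, 0])            # two active parents (illustrative)
lam = np.array([0.8, 0.5, 0.3, 0.9])
print(noisy_or_prob(u, lam))          # probability the child variable turns on
```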

5 0.64731956 171 nips-2009-Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution

Author: Cosmin Bejan, Matthew Titsworth, Andrew Hickl, Sanda Harabagiu

Abstract: We present a sequence of unsupervised, nonparametric Bayesian models for clustering complex linguistic objects. In this approach, we consider a potentially infinite number of features and categorical outcomes. We evaluated these models for the task of within- and cross-document event coreference on two corpora. All the models we investigated show significant improvements when compared against an existing baseline for this task.

6 0.63853669 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

7 0.57121903 89 nips-2009-FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs

8 0.53541809 114 nips-2009-Indian Buffet Processes with Power-law Behavior

9 0.53352386 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

10 0.53124428 21 nips-2009-Abstraction and Relational learning

11 0.48370919 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

12 0.47543514 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

13 0.40943933 42 nips-2009-Bayesian Sparse Factor Models and DAGs Inference and Comparison

14 0.40236863 226 nips-2009-Spatial Normalized Gamma Processes

15 0.3974027 49 nips-2009-Breaking Boundaries Between Induction Time and Diagnosis Time Active Information Acquisition

16 0.38127539 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

17 0.34689265 90 nips-2009-Factor Modeling for Advertisement Targeting

18 0.32987773 57 nips-2009-Conditional Random Fields with High-Order Features for Sequence Labeling

19 0.32393911 68 nips-2009-Dirichlet-Bernoulli Alignment: A Generative Model for Multi-Class Multi-Label Multi-Instance Corpora

20 0.3154707 3 nips-2009-AUC optimization and the two-sample problem


similar papers computed by LDA model

LDA topic weights for this paper:

topicId topicWeight

[(23, 0.077), (24, 0.049), (25, 0.074), (35, 0.062), (36, 0.113), (39, 0.063), (45, 0.127), (58, 0.071), (61, 0.012), (64, 0.012), (66, 0.046), (71, 0.069), (81, 0.027), (86, 0.067), (91, 0.031)]
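
For a concrete sense of where (topicId, topicWeight) pairs like these come from, the sketch below fits an LDA model with gensim and reads off a document's topic distribution. gensim is only an assumed example implementation; the page does not say which tool produced the weights above, and the toy corpus is invented.

```python
from gensim import corpora, models

# Toy corpus of tokenized documents (illustrative only).
texts = [
    ["latent", "feature", "link", "prediction", "network"],
    ["bayesian", "nonparametric", "indian", "buffet", "process"],
    ["relational", "data", "tensor", "factorization", "clustering"],
]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, random_state=0)

# Sparse list of (topicId, topicWeight) pairs for the first document,
# analogous to the vector listed above.
print(lda.get_document_topics(corpus[0]))
```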

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.87833834 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

Author: Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

2 0.82549798 157 nips-2009-Multi-Label Prediction via Compressed Sensing

Author: John Langford, Tong Zhang, Daniel J. Hsu, Sham M. Kakade

Abstract: We consider multi-label prediction problems with large output spaces under the assumption of output sparsity – that the target (label) vectors have small support. We develop a general theory for a variant of the popular error correcting output code scheme, using ideas from compressed sensing for exploiting this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and also provide a more detailed analysis for the linear prediction setting.
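
The reduction described in this abstract can be sketched in a few lines: compress the sparse label vectors with a random projection, learn one regressor per compressed dimension, then decode a test prediction with a sparse-recovery routine. The code below is only an illustration of that general idea using ridge regression and orthogonal matching pursuit; it is not the authors' implementation, and all sizes and names are invented.

```python
import numpy as np
from sklearn.linear_model import Ridge, OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, d, L, m, k = 200, 20, 100, 15, 3   # samples, features, labels, compressed dim, sparsity

X = rng.normal(size=(n, d))
Y = np.zeros((n, L))
for i in range(n):                     # each example has k active labels
    Y[i, rng.choice(L, size=k, replace=False)] = 1.0

A = rng.normal(size=(m, L)) / np.sqrt(m)   # random compression matrix
Z = Y @ A.T                                # compressed label targets

reg = Ridge(alpha=1.0).fit(X, Z)           # one linear regressor per compressed dimension

x_test = rng.normal(size=(1, d))
z_hat = reg.predict(x_test).ravel()        # predicted compressed label vector

# Decode: find a k-sparse label vector y such that A @ y is close to z_hat.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, z_hat)
predicted_labels = np.flatnonzero(omp.coef_)
print(predicted_labels)
```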

3 0.82360309 61 nips-2009-Convex Relaxation of Mixture Regression with Efficient Algorithms

Author: Novi Quadrianto, John Lim, Dale Schuurmans, Tibério S. Caetano

Abstract: We develop a convex relaxation of maximum a posteriori estimation of a mixture of regression models. Although our relaxation involves a semidefinite matrix variable, we reformulate the problem to eliminate the need for general semidefinite programming. In particular, we provide two reformulations that admit fast algorithms. The first is a max-min spectral reformulation exploiting quasi-Newton descent. The second is a min-min reformulation consisting of fast alternating steps of closed-form updates. We evaluate the methods against Expectation-Maximization in a real problem of motion segmentation from video data.

4 0.80867791 187 nips-2009-Particle-based Variational Inference for Continuous Systems

Author: Andrew Frank, Padhraic Smyth, Alexander T. Ihler

Abstract: Since the development of loopy belief propagation, there has been considerable work on advancing the state of the art for approximate inference over distributions defined on discrete random variables. Improvements include guarantees of convergence, approximations that are provably more accurate, and bounds on the results of exact inference. However, extending these methods to continuous-valued systems has lagged behind. While several methods have been developed to use belief propagation on systems with continuous values, recent advances for discrete variables have not as yet been incorporated. In this context we extend a recently proposed particle-based belief propagation algorithm to provide a general framework for adapting discrete message-passing algorithms to inference in continuous systems. The resulting algorithms behave similarly to their purely discrete counterparts, extending the benefits of these more advanced inference techniques to the continuous domain.

5 0.80091804 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

Author: Lei Shi, Thomas L. Griffiths

Abstract: The goal of perception is to infer the hidden states in the hierarchical process by which sensory data are generated. Human behavior is consistent with the optimal statistical solution to this problem in many tasks, including cue combination and orientation detection. Understanding the neural mechanisms underlying this behavior is of particular importance, since probabilistic computations are notoriously challenging. Here we propose a simple mechanism for Bayesian inference which involves averaging over a few feature detection neurons which fire at a rate determined by their similarity to a sensory stimulus. This mechanism is based on a Monte Carlo method known as importance sampling, commonly used in computer science and statistics. Moreover, a simple extension to recursive importance sampling can be used to perform hierarchical Bayesian inference. We identify a scheme for implementing importance sampling with spiking neurons, and show that this scheme can account for human behavior in cue combination and the oblique effect.
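
As a worked illustration of the importance-sampling idea in this abstract, the sketch below estimates a posterior mean E[z | x] by drawing samples from a proposal distribution, weighting each by prior times likelihood over the proposal density, and averaging. The one-dimensional Gaussian model and all names are invented for illustration; the paper's neural implementation is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy generative model: z ~ N(0, 1) (prior), x | z ~ N(z, 0.5^2) (likelihood).
x_obs = 1.2

# Proposal distribution q(z): a broad Gaussian.
z = rng.normal(loc=0.0, scale=2.0, size=5000)
q = norm.pdf(z, loc=0.0, scale=2.0)

# Unnormalized importance weights: prior * likelihood / proposal, then normalize.
w = norm.pdf(z, 0.0, 1.0) * norm.pdf(x_obs, z, 0.5) / q
w /= w.sum()

posterior_mean = np.sum(w * z)   # self-normalized importance-sampling estimate
print(posterior_mean)            # close to the exact value 0.96 for this conjugate model
```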

6 0.78956795 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

7 0.78262496 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

8 0.78225094 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

9 0.77934092 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

10 0.77796775 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

11 0.77629566 129 nips-2009-Learning a Small Mixture of Trees

12 0.77353072 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

13 0.77343541 97 nips-2009-Free energy score space

14 0.77334923 132 nips-2009-Learning in Markov Random Fields using Tempered Transitions

15 0.77305168 260 nips-2009-Zero-shot Learning with Semantic Output Codes

16 0.77235556 169 nips-2009-Nonlinear Learning using Local Coordinate Coding

17 0.77197939 18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models

18 0.77065086 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior

19 0.76745498 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

20 0.76710349 226 nips-2009-Spatial Normalized Gamma Processes