nips nips2009 nips2009-29 knowledge-graph by maker-knowledge-mining

29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism


Source: pdf

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Here we extend the latent factor model framework to two or more unbounded layers of latent factors. [sent-5, score-0.447]

2 From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. [sent-6, score-0.367]

3 We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. [sent-7, score-0.259]

4 1 Introduction The Indian Buffet Process (IBP) [5] is a Bayesian nonparametric approach that models objects as arising from an unbounded number of latent features. [sent-8, score-0.241]

5 Consider music tag data collected through the internet service provider Last.fm. [sent-12, score-0.205]

6 Users of the service label songs and artists with descriptive tags that collectively form a representation of an artist or song. [sent-14, score-0.213]

7 These tags can then be used to organize playlists around certain themes, such as music from the 80’s. [sent-15, score-0.169]

8 The top 8 tags for the popular band Radiohead are: alternative, rock, alternative rock, indie, electronic, britpop, british, and indie rock. [sent-16, score-0.169]

9 The tags point to various facets of the band, for example that they are based in Britain, that they make use of electronic music and that their style of music is alternative and/or rock. [sent-17, score-0.282]

10 Modeling such data with an IBP allows us to capture the latent factors that give rise to the tags, including inferring the number of factors characterizing the data. [sent-19, score-0.321]

11 However the IBP assumes these latent features are independent across object instances. [sent-20, score-0.22]

12 Despite there being a wealth of distinct factors that collectively describe an artist, it is clear that the co-occurrence of some features is more likely than others. [sent-22, score-0.218]

13 For example, factors associated with the tag alternative are more likely to co-occur with those associated with the tag indie than those associated with tag classical. [sent-23, score-0.567]

14 The main contribution of this work is to present a method for extending infinite latent factor models to two or more unbounded layers of factors, with upper-layer factors defining a factorial prior distribution over the binary factors of the layer below. [sent-24, score-0.665]

15 In this framework, the upper-layer factors express correlations between lower-layer factors via a noisy-or mechanism. [sent-25, score-0.204]

16 We show how the complete model is amenable to efficient inference via a Gibbs sampling procedure and compare performance of our hierarchical method with the standard IBP construction on both a digit modeling task, and a music genre-tagging task. [sent-29, score-0.296]

17 ∀k) and feature variables $z_{n,1:K} = [z_{nk}]_{k=1}^{K}$, which we take to be binary: $z_{nk} \in \{0, 1\}$. [sent-35, score-0.53]

18 We denote the presence of feature k in example n as znk = 1 and its absence as znk = 0. [sent-36, score-1.033]

19 Features present in an object are said to be active while absent features are inactive. [sent-37, score-0.246]

20 ∀n, k): $x_n \mid z_{n,1:K}, \theta \sim F(z_{n,1:K}, \theta)$, $z_{nk} \mid \mu_k \sim \mathrm{Bernoulli}(\mu_k)$, $\mu_k \mid \alpha \sim \mathrm{Beta}(\alpha/K, 1)$. According to the standard development of the IBP, we can marginalize over variables $\mu_{1:K}$ and take the limit $K \to \infty$ to recover a distribution over an unbounded binary feature matrix Z. [sent-42, score-0.766]
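To make the finite construction concrete, here is a minimal NumPy sketch of the finite beta-Bernoulli feature model before the K → ∞ limit is taken. The array names, the Gaussian form chosen for F, and the noise scale are illustrative assumptions, not anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_finite_feature_model(N=100, K=20, D=16, alpha=2.0):
    """Sample from the finite beta-Bernoulli latent feature model:
    mu_k ~ Beta(alpha/K, 1), z_nk ~ Bernoulli(mu_k),
    and, as an illustrative choice of F, x_n ~ N(z_n @ theta, 0.1^2 I)."""
    mu = rng.beta(alpha / K, 1.0, size=K)          # per-feature probabilities
    Z = (rng.random((N, K)) < mu).astype(int)      # binary feature matrix
    theta = rng.normal(size=(K, D))                # assumed feature dictionary
    X = Z @ theta + 0.1 * rng.normal(size=(N, D))  # assumed observation model
    return mu, Z, X

mu, Z, X = sample_finite_feature_model()
print("mean number of active features per object:", Z.sum(axis=1).mean())
```

As K grows with α fixed, the expected number of active features per object stays near α, which is the behaviour the K → ∞ limit preserves.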

21 As with the stick-breaking construction of the Dirichlet process (DP), the IBP stick-breaking construction provides a direct characterization of the random latent feature probabilities via an unbounded sequence. [sent-44, score-0.406]

22 Consider once again the finite latent factor model described above. [sent-45, score-0.201]

23 Letting K → ∞, Z now possesses an unbounded number of columns with a corresponding unbounded set of random probabilities [µ1, µ2, …]. [sent-46, score-0.246]

24 3 A Hierarchy of Latent Features Via a Noisy-OR Mechanism In this section we extend the infinite latent features framework to incorporate interactions between multiple layers of unbounded features. [sent-56, score-0.326]

25 We consider here the simplest hierarchical latent factor model consisting of two layers of binary latent features: an upper-layer binary latent feature matrix Y with elements ynj , and a lower-layer binary latent feature matrix Z with elements znk . [sent-58, score-1.906]

26 The probability distribution over the elements ynj is defined as previously in the limit construction of the IBP: ynj | µj ∼ Bernoulli(µj ), with µj | αµ ∼ Beta(αµ /J, 1). [sent-59, score-1.028]

27 The lower binary variables $z_{nk}$ are also defined as Bernoulli distributed random quantities: $z_{nk} \mid y_{n,:}, V_{:,k} \sim \mathrm{Bernoulli}\bigl(1 - \prod_j (1 - y_{nj} V_{jk})\bigr)$ indep. [sent-60, score-1.517]
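The noisy-or link in Eq. (1) is straightforward to evaluate in bulk; the sketch below (array shapes and names are my own) returns P(z_nk = 1 | y_n,:, V) for a whole layer at once.

```python
import numpy as np

def noisy_or_prob(Y, V):
    """P(z_nk = 1 | y_n, V) = 1 - prod_j (1 - y_nj * V_jk).

    Y : (N, J) binary upper-layer features
    V : (J, K) noisy-or weights in [0, 1]
    Returns the (N, K) activation probabilities for the lower layer."""
    # For each (n, k): product over j of the per-trial failure probabilities
    fail = 1.0 - Y[:, :, None] * V[None, :, :]       # (N, J, K)
    return 1.0 - fail.prod(axis=1)

Y = np.array([[1, 0, 1]])
V = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.2, 0.8]])
print(noisy_or_prob(Y, V))   # [[1 - 0.1*0.8, 1 - 0.9*0.2]] = [[0.92, 0.82]]
```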

28 (1) However, here the probability that znk = 1 is a function of the upper binary variables yn,: and the kth column of the weight matrix V , with probabilities Vjk ∈ [0, 1] connecting ynj to znk . [sent-62, score-1.592]

29 The crux of the model is how ynj interacts with znk via a noisy-or mechanism defined in Eq. [sent-63, score-1.073]

30 The binary ynj modulates the involvement of the Vjk terms in the product, which in turn modulates P (znk = 1 | yn,: , V:,k ). [sent-65, score-0.577]

31 The noisy-or mechanism interacts positively in the sense that changing an element ynj from inactive to active can only increase P (znk = 1 | yn: , V:k ), or leave it unchanged in the case where Vjk = 0. [sent-66, score-0.834]

32 We interpret the active yn,: to be possible causes of the activation of the individual znk , ∀k. [sent-67, score-0.646]

33 In the stick-breaking parametrization: $\mu_j = \prod_{l=1}^{j} \mu'_l$ with $\mu'_l \overset{iid}{\sim} \mathrm{Beta}(\alpha_\mu, 1)$; $\nu_k = \prod_{l=1}^{k} \nu'_l$ with $\nu'_l \overset{iid}{\sim} \mathrm{Beta}(\alpha_\nu, 1)$; $V_{jk} \sim \mathrm{Beta}(c\nu_k, c(1 - \nu_k) + 1)$; $y_{nj} \sim \mathrm{Bern}(\mu_j)$; $z_{nk} \sim \mathrm{Bern}\bigl(1 - \prod_j (1 - y_{nj} V_{jk})\bigr)$; $x_n \sim F(z_{n,:}, \theta)$ for n = 1, …, N. [sent-80, score-1.981]
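A small sketch of the stick-breaking quantities summarized in the figure: both layers' feature probabilities are running products of Beta(α, 1) draws, and the noisy-or weights are drawn conditionally on the lower-layer sticks. The truncation levels and variable names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking_ibp(alpha, truncation):
    """IBP stick-breaking: mu_j = prod_{l<=j} eta_l, with eta_l ~ Beta(alpha, 1).

    Returns a decreasing sequence of feature probabilities."""
    eta = rng.beta(alpha, 1.0, size=truncation)
    return np.cumprod(eta)

mu = stick_breaking_ibp(alpha=3.0, truncation=10)    # upper-layer probabilities
nu = stick_breaking_ibp(alpha=3.0, truncation=15)    # lower-layer probabilities
c = 5.0
# Noisy-or weights: V_jk ~ Beta(c * nu_k, c * (1 - nu_k) + 1)
V = rng.beta(c * nu, c * (1.0 - nu) + 1.0, size=(len(mu), len(nu)))
print(mu[:5], nu[:5])
```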

34 Right: Summary of the hierarchical infinite noisy-or factor model in the stick-breaking parametrization. [sent-82, score-0.17]

35 Finally, we augment the model with an additional random matrix A with multinomial elements Ank , assigning each instance of znk = 1 to an index j corresponding to the active upper-layer unit ynj responsible for causing the event. [sent-90, score-1.205]

36 (1) as specifying, for each znk , an ordered series of binary trials (i. [sent-93, score-0.6]

37 For each $z_{nk}$, we proceed through the ordered set of elements, $\{V_{jk}, y_{nj}\}_{j=1,2,\dots}$. [sent-96, score-0.998]

38 With probability yn,j ∗ Vj ∗ ,k , trial j ∗ is deemed a “success” and we set znk = 1, Ank = j ∗ , and no further trials are conducted for {n, k, j > j ∗ }. [sent-100, score-0.56]

39 Conversely, with probability (1 − ynj ∗ Vj ∗ k ) the trial is deemed a “failure” and we move on to trial j ∗ + 1. [sent-101, score-0.525]

40 Since all trials j associated with inactive upper-layer features are failures with probability one (because ynj = 0), we need only consider the trials for which ynj = 1. [sent-102, score-1.27]

41 If, for a given znk , all trials j for which ynj = 1 (active) are failures, then we set znk = 0 with probability one. [sent-103, score-1.504]
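The ordered-trials augmentation can be sketched directly. In this hypothetical routine, A_nk is returned as the 1-based index of the successful trial, with np.inf standing in for "no active trial succeeded" — an encoding I choose so that the failure indicator I(A_nk > j) used below works uniformly; it is not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_z_and_assignment(y_n, V_k):
    """Ordered binary trials realizing the noisy-or for a single (n, k).

    y_n : (J,) binary upper-layer vector
    V_k : (J,) column of noisy-or weights
    Returns (z_nk, a_nk): a_nk is the 1-based index of the successful trial,
    or np.inf if every active trial failed (so z_nk = 0)."""
    for j in range(len(y_n)):
        if y_n[j] == 0:
            continue                      # inactive causes fail with probability 1
        if rng.random() < V_k[j]:         # success with probability y_nj * V_jk
            return 1, j + 1               # z_nk = 1, caused by upper feature j
    return 0, np.inf                      # all active trials failed

z, a = sample_z_and_assignment(np.array([1, 0, 1]), np.array([0.3, 0.9, 0.7]))
print(z, a)
```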

42 With the generative process specified as above, we can define the posterior distribution over the weights V given the assignment matrix A and the latent feature matrix Y . [sent-109, score-0.223]

43 (the number of times $y_{nj}$ caused the activation of $z_{nk}$) and let $N_{jk} = \sum_{n=1}^{N} y_{nj}\, I(A_{nk} > j)$, that is, the number of times that the j-th trial was a failure for $z_{nk}$ despite $y_{nj}$ being active. [sent-112, score-2.461]

44 Finally, let us also denote the number of times $y_{:,j}$ is active: $N_j = \sum_{n=1}^{N} y_{nj}$. [sent-113, score-0.469]
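Given A and Y under the encoding above, the counts M_jk, N_jk and N_j can be tallied vectorially, as sketched below. By beta-Bernoulli conjugacy one would then expect V_jk | A, Y ~ Beta(cν_k + M_jk, c(1 − ν_k) + 1 + N_jk); that exact form is my reading of the conjugate update rather than a quotation from the paper.

```python
import numpy as np

def noisy_or_counts(A, Y):
    """Sufficient statistics for the posterior over the noisy-or weights V.

    A : (N, K) assignments; A[n, k] = j (1-based) if upper feature j caused
        z_nk = 1, and np.inf if z_nk = 0 (every active trial failed).
    Y : (N, J) binary upper-layer feature matrix.
    Returns:
      M  (J, K): M_jk = #{n : A_nk = j}            (successes of trial j)
      Nf (J, K): N_jk = sum_n y_nj * I(A_nk > j)   (failed active trials)
      Nj (J,)  : N_j  = sum_n y_nj                 (activity of feature j)"""
    js = np.arange(1, Y.shape[1] + 1)                        # 1-based trial index
    M = (A[:, None, :] == js[None, :, None]).sum(axis=0)
    Nf = ((A[:, None, :] > js[None, :, None]) * Y[:, :, None]).sum(axis=0)
    Nj = Y.sum(axis=0)
    return M, Nf, Nj

A = np.array([[1.0, np.inf],
              [2.0, 1.0]])
Y = np.array([[1, 1],
              [1, 1]])
M, Nf, Nj = noisy_or_counts(A, Y)
print(M, Nf, Nj, sep="\n")
```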

45 Taking the limit of J → ∞ is relatively straightforward as the upper-layer factor model naturally tends to an IBP: Y ∼ IBP, and its involvement in the remainder of the model is limited to the set of active elements of Y , which remains finite for finite datasets. [sent-119, score-0.322]

46 In taking K → ∞, the distribution over the unbounded νk converges to that of the IBP, while the conditional distributions over the noisy-or weights Vjk remain simple beta distributions given the corresponding νk (as in Eq. [sent-120, score-0.234]

47 The algorithm is based jointly on the blocked Gibbs sampling strategy for truncated Dirichlet distributions [7] and on the IBP semi-ordered slice sampler [10], which we employ at each layer of the hierarchy. [sent-123, score-0.273]

48 Because both algorithms are based on the strategy of directly sampling an instantiation of the model parameters, their use together permits us to define an efficient extended blocked Gibbs sampler over the entire model without approximation. [sent-124, score-0.197]

49 To facilitate our description of the semi-ordered slice sampler, we separate $\mu_{1:\infty}$ into two subsets: $\mu^{+}_{1:J^+}$ and $\mu^{o}_{1:\infty}$, where $\mu^{+}_{1:J^+}$ are the probabilities associated with the set of $J^+$ active upper-layer factors $Y^+$ (those that appear at least once in the dataset, i. [sent-125, score-0.38]

50 ∃i : $y_{ij} = 1$, $1 \le j \le J^+$) and $\mu^{o}_{1:\infty}$ are associated with the unbounded set of inactive features $Y^o$ (those not appearing in the dataset). [sent-127, score-0.349]

51 Similarly, we separate $\nu_{1:\infty}$ into $\nu^{+}_{1:K^+}$ and $\nu^{o}_{1:\infty}$, and Z into corresponding active $Z^+$ and inactive $Z^o$, where $K^+$ is the number of active lower-layer factors. [sent-128, score-0.46]

52 The uniformly distributed auxiliary slice variable $s^y$ controls the truncation level of the upper-layer IBP, where $\mu^*$ is defined as the smallest probability µ corresponding to an active feature: $s^y \mid Y, \mu_{1:\infty} \sim \mathrm{Uniform}(0, \mu^*)$, $\mu^* = \min\bigl\{1, \min_{1 \le j \le J^+} \mu^{+}_{j}\bigr\}$. [sent-132, score-0.544]
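The auxiliary slice variable is just a uniform draw below the smallest active stick weight; a minimal sketch (the empty-active-set convention of returning 1 is my assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_slice(mu_active):
    """s ~ Uniform(0, mu*), with mu* the smallest active feature probability
    (capped at 1). An empty active set gives mu* = 1."""
    mu_star = min(1.0, mu_active.min()) if len(mu_active) else 1.0
    return rng.uniform(0.0, mu_star)

s_y = sample_slice(np.array([0.7, 0.4, 0.25]))
print(s_y)   # a value in (0, 0.25); features with mu_j < s_y are truncated away
```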

53 However, given sy , the conditional distribution p(ynj = 1 | Z, sy , µ1:∞ ) = 0 for all n, j such that µj < sy . [sent-134, score-0.47]

54 This is the crux of the slice sampling approach: Each sample sy adaptively truncates the model, with µ1:J > sy . [sent-135, score-0.471]

55 The above expression arises from using the IBP stick-breaking construction to marginalize over the inactive elements of µ [10]. [sent-139, score-0.263]

56 For each of the $J^o$ inactive features drawn, the corresponding features $y^{o}_{1:N,1:J^o}$ are initialized to zero and the corresponding weights $V^{o}_{1:J^o,1:K}$ are sampled from their prior in Eq. [sent-140, score-0.354]

57 (6)); $y_{n,\neg j}$ represents all elements $y_{n,1:J}$ except $y_{nj}$. The conditional probability of the lower-layer binary variables is given by: $p(z_{nk} = 1 \mid y_{n,:}, V_{:,k}) = 1 - \prod_j (1 - y_{nj} V_{jk})$. [sent-145, score-1.022]

58 Once again we separate Y and $\mu_{1:\infty}$ into a set of active features $Y^+$ with probabilities $\mu^{+}_{1:J^+}$, and a set of inactive features $Y^o$ with $\mu^{o}_{1:\infty}$. [sent-147, score-0.397]

59 The inactive set is discarded while the active set $\mu^{+}_{1:J^+}$ is resampled from the posterior distribution: $\mu^{+}_{j} \mid y_{:,j} \sim \mathrm{Beta}(N_j, 1 + N - N_j)$. [sent-148, score-0.339]
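The conjugate resampling of the active stick weights quoted above is a one-liner; a sketch with assumed array shapes:

```python
import numpy as np

rng = np.random.default_rng(4)

def resample_active_mu(Y_active):
    """mu_j^+ | y_:,j ~ Beta(N_j, 1 + N - N_j) for each active upper feature."""
    N = Y_active.shape[0]
    Nj = Y_active.sum(axis=0)          # each active column has Nj >= 1
    return rng.beta(Nj, 1.0 + N - Nj)

Y_active = np.array([[1, 0], [1, 1], [0, 1], [1, 0]])
print(resample_active_mu(Y_active))
```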

60 At this point we also separate the lower-layer factors into an active set of $K^+$ factors $Z^+$ with corresponding $\nu^{+}_{1:K^+}$, $V^{+}_{1:J^+,1:K^+}$ and data likelihood parameters $\theta^+$; and a discarded inactive set. [sent-149, score-0.547]

61 Semi-ordered slice sampling of the lower-layer factor model: sampling the variables of the lower-layer IFM model proceeds analogously to the upper-layer IBP. [sent-151, score-0.235]

62 We proceed by making use of the marginal distribution over the assignment probabilities to define a second auxiliary slice variable, sz . [sent-154, score-0.214]

63 The auxiliary slice variable is sampled according to the following, where $\nu^*$ is defined as the smallest probability corresponding to an active feature: $s^z \mid Z, \nu_{1:\infty} \sim \mathrm{Uniform}(0, \nu^*)$, $\nu^* = \min\bigl\{1, \min_{1 \le k \le K^+} \nu^{+}_{k}\bigr\}$. [sent-156, score-0.301]

64 Given $s^z$ and Y, the random probabilities over the inactive lower-layer binary features, $\nu^{o}_{1:\infty}$, are sampled sequentially to draw a set of $K^o$ feature probabilities, until $\nu_{K^o+1} < s^z$. [sent-158, score-0.435]

65 The distribution over the ordered inactive features is log-concave in log νk, and is therefore amenable to efficient sampling via adaptive rejection sampling (as was done in sampling $\mu^o$). [sent-164, score-0.423]

66 Each of the $K^o$ inactive features is initialized to zero for every data object, $Z^o = 0$, while the corresponding $V^o$ and likelihood parameters $\theta^o$ are drawn from their priors. [sent-165, score-0.28]

67 Then, conditional on this prior, the data X and parameters θ, we sample sequentially for each $z_{nk}$: $p(z_{nk} \mid y_{n,:}, V^{+}_{:,k}, z_{n,\neg k}, \theta, \nu^*) = \frac{1}{\nu^*}\Bigl(1 - \prod_{j=1}^{J^+} (1 - y_{nj} V^{+}_{jk})\Bigr) f(x_n \mid z_{n,:}, \theta)$, where $f(x_n \mid z_{n,:}, \theta)$ is the likelihood function for the nth data object. [sent-170, score-1.039]
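Put together, one Gibbs step for a single lower-layer variable weighs z_nk = 1 against z_nk = 0 using the noisy-or prior probability and the data likelihood. The sketch below is my reading of that update (the slice normalization constant is common to both cases and drops out), with an illustrative Gaussian likelihood standing in for the experiment-specific observation models.

```python
import numpy as np

rng = np.random.default_rng(5)

def gibbs_update_znk(n, k, Z, Y, V, x, loglik):
    """One Gibbs step for z_nk; loglik(x_n, z_n) -> float keeps the
    observation model abstract. A sketch of my reading of the update."""
    p1_prior = 1.0 - np.prod(1.0 - Y[n] * V[:, k])   # noisy-or prior probability
    z1, z0 = Z[n].copy(), Z[n].copy()
    z1[k], z0[k] = 1, 0
    logw1 = np.log(p1_prior + 1e-300) + loglik(x, z1)
    logw0 = np.log(1.0 - p1_prior + 1e-300) + loglik(x, z0)
    p1 = 1.0 / (1.0 + np.exp(logw0 - logw1))
    Z[n, k] = int(rng.random() < p1)
    return Z

# Illustrative Gaussian likelihood x_n ~ N(z_n @ theta, sigma^2 I) -- an assumption
theta, sigma = rng.normal(size=(3, 4)), 0.5
loglik = lambda x, z: -0.5 * np.sum((x - z @ theta) ** 2) / sigma ** 2
Z = np.zeros((1, 3), dtype=int)
Y = np.array([[1, 1]])
V = rng.uniform(size=(2, 3))
x = rng.normal(size=4)
print(gibbs_update_znk(0, 0, Z, Y, V, x, loglik))
```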

68 In each case, hyperparameters are specified with respect to the IBP (using cross-validation by evaluating the likelihood of a holdout set) and held fixed for the hierarchical factor model. [sent-183, score-0.169]

69 To model MNIST digits, we augment both the IBP and the hierarchical model with a matrix G of the same size as Z and with i. [sent-188, score-0.159]

70 The inclusion of G introduces an additional step to our Gibbs sampling procedure; however, the rest of the hierarchical infinite factor model is as described in Sect. [sent-193, score-0.215]

71 In order to assess the success of our hierarchical IFM in capturing higher-order factors present in the MNIST data, we consider a de-noising task. [sent-195, score-0.188]

72 One of the potential benefits of the style of model we propose here is that there is the opportunity for latent factors at one layer to share features at a lower layer. [sent-201, score-0.398]

73 Fig. 2 (d)-(e) compare the features (sampled rows of the θ matrix) learned by both the IBP and by the hierarchical noisy-or factor model. [sent-205, score-0.223]

74 Interestingly, the sampled features learned in the hierarchical model appear to be slightly more spatially localized and sparse. [sent-206, score-0.193]

75 Interestingly, the IBP model infers a greater number of latent factors than did the 2-layer model. [sent-209, score-0.246]

76 [figure axis residue: "num. active features"] [sent-218, score-0.223]

77 (f) A comparison of the distributions of the number of active elements between the IBP and the noisy-or model. [sent-246, score-0.165]

78 (g) A comparison of the number of active (lower-layer) factors possessed by an object between the IBP and the hierarchical model. [sent-247, score-0.354]

79 (h) the distribution of upper-layer active factors and (i) the number of active factors found in an object. [sent-248, score-0.49]

80 However, the distribution over factors active for each data object is nearly identical. [sent-250, score-0.268]

81 This suggests the possibility that the IBP is maintaining specialized factors that possibly represent a superposition of frequently co-occurring factors that the noisy-or model has captured more compactly. [sent-251, score-0.231]

82 Experiment II: Music Tags. Returning to our motivating example from the introduction, we extracted tags and tag frequencies from the social music website Last.fm. [sent-253, score-0.309]

83 We will adopt a different approach to the standard Latent Dirichlet Allocation (LDA) document processing strategy of modeling the document – or in this case tag collection – as having been generated from a mixture of tag multinomials. [sent-257, score-0.28]

84 We wish to distinguish between an artist that everyone agrees is both country and rock and an artist about whom people are divided over whether they are rock or country. [sent-258, score-0.206]

85 connecting Z to the data X via the distribution: $X_{nt} \sim \mathrm{Binomial}\bigl(1 - \prod_k (1 - z_{nk} W_{kt}),\, C\bigr)$, where C is the limit on the number of possible counts achievable. [sent-265, score-0.525]
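The tag-count observation model is a second noisy-or feeding a binomial; below is a sketch of sampling counts under it, with the weight matrix W indexed (factor, tag) and C as the count cap. The shapes are my assumptions about the symbols above.

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_tag_counts(Z, W, C):
    """X_nt ~ Binomial(C, 1 - prod_k (1 - z_nk * W_kt)).

    Z : (N, K) binary lower-layer factors
    W : (K, T) noisy-or weights from factors to tags, in [0, 1]
    C : maximum achievable count per (artist, tag) cell"""
    p = 1.0 - np.prod(1.0 - Z[:, :, None] * W[None, :, :], axis=1)   # (N, T)
    return rng.binomial(C, p)

Z = np.array([[1, 0, 1]])
W = rng.uniform(0, 0.5, size=(3, 4))
print(sample_tag_counts(Z, W, C=25))
```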

86 [figure axis residue: "num. active features"] [sent-273, score-0.223]

87 Figure 3: The distribution of active features for the noisy-or model at (a) the lower layer and (c) the upper layer. [sent-276, score-0.473]

88 The distribution over active features per data object for the (b) upper-layer and (d) lower-layer. [sent-277, score-0.246]

89 The two most probable factors to emerge at the upper layer are associated with the tags (in order of probability): 1. [sent-285, score-0.278]

90 pop, rock, 80s, dance, 90s. The ability of the 2-layer noisy-or model to capture higher-order structure in the tag data was again assessed through a comparison to the standard IBP using the noisy-or observation model above. [sent-287, score-0.194]

91 The model was also compared against a more standard latent factor model with the latent representation $\eta_{nk}$ modeling the data through a generalized linear model: $X_{nt} \sim \mathrm{Binomial}(\mathrm{Logistic}(\eta_{n,:} O_{:,t}),\, C)$, where the function Logistic(. [sent-288, score-0.345]

92 Discussion: We have defined a noisy-or mechanism that allows one infinite factor model to act as a prior for another infinite factor model. [sent-301, score-0.209]

93 The model permits high-order structure to be captured in a factor model framework while maintaining an efficient sampling algorithm. [sent-302, score-0.177]

94 The model presented here is similar in spirit to the hierarchical Beta process [11], in the sense that both models define a hierarchy of unbounded latent factor models. [sent-303, score-0.422]

95 However, while the hierarchical Beta process can be seen as a way to group objects in the data-set with similar features, our model provides a way to group features that frequently co-occur in the data-set. [sent-304, score-0.241]

96 It is perhaps more similar in spirit to the work of [9], who also sought a means of associating latent factors in an IBP; however, their work does not act directly on the unbounded binary factors as ours does. [sent-305, score-0.458]

97 Recently the question of how to define a hierarchical factor model to induce correlations between lower-layer factors was addressed by [3] with their IBP-IBP model. [sent-306, score-0.272]

98 However, unlike our model, where the dependencies are induced by the upper-layer factors via a noisy-or mechanism, the IBP-IBP model models correlations via an AND construct through the interaction of binary factors. [sent-307, score-0.171]

99 fm for making the tag data publicly available and Paul Lamere for his help in processing the tag data. [sent-310, score-0.28]

100 Infinite latent feature models and the Indian buffet process. [sent-330, score-0.243]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('znk', 0.503), ('ynj', 0.469), ('ibp', 0.357), ('vjk', 0.255), ('inactive', 0.174), ('sy', 0.15), ('active', 0.143), ('tag', 0.14), ('ank', 0.133), ('beta', 0.119), ('latent', 0.117), ('tags', 0.104), ('factors', 0.102), ('unbounded', 0.095), ('yn', 0.094), ('hierarchical', 0.086), ('features', 0.08), ('slice', 0.079), ('layer', 0.072), ('nj', 0.067), ('music', 0.065), ('factor', 0.057), ('sz', 0.057), ('probabilities', 0.056), ('rock', 0.054), ('buffet', 0.052), ('artist', 0.049), ('mechanism', 0.048), ('gibbs', 0.048), ('indian', 0.047), ('construction', 0.046), ('blocked', 0.045), ('ifm', 0.045), ('indie', 0.045), ('mjk', 0.045), ('sampling', 0.045), ('binary', 0.042), ('hierarchy', 0.04), ('zn', 0.037), ('mcmc', 0.037), ('xn', 0.037), ('collectively', 0.036), ('njk', 0.036), ('layers', 0.034), ('sampler', 0.032), ('rejection', 0.032), ('dirichlet', 0.032), ('nite', 0.031), ('yoshua', 0.031), ('aistat', 0.03), ('eck', 0.03), ('hbp', 0.03), ('montr', 0.03), ('xnt', 0.03), ('yni', 0.03), ('trials', 0.029), ('objects', 0.029), ('trial', 0.028), ('zoubin', 0.027), ('mnist', 0.027), ('digit', 0.027), ('feature', 0.027), ('model', 0.027), ('electronic', 0.027), ('bern', 0.026), ('crux', 0.026), ('eleventh', 0.026), ('vik', 0.026), ('likelihood', 0.026), ('ordered', 0.026), ('binomial', 0.024), ('aaron', 0.024), ('ars', 0.024), ('artists', 0.024), ('involvement', 0.024), ('ql', 0.024), ('factorial', 0.024), ('object', 0.023), ('bernoulli', 0.023), ('douglas', 0.023), ('elements', 0.022), ('auxiliary', 0.022), ('posterior', 0.022), ('draw', 0.022), ('multinomial', 0.022), ('limit', 0.022), ('modulates', 0.021), ('marginalize', 0.021), ('cal', 0.021), ('sample', 0.021), ('permits', 0.021), ('facets', 0.021), ('conditional', 0.02), ('prior', 0.02), ('failure', 0.02), ('band', 0.02), ('failures', 0.02), ('digits', 0.019), ('matrix', 0.019), ('process', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. 1

2 0.30329752 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

3 0.14929006 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

Author: Francois Caron, Arnaud Doucet

Abstract: Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose here extensions of these models where the dependency between samples is given by a known decomposable graph. These models have appealing properties and can be easily learned using Monte Carlo techniques. 1

4 0.14719763 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

Author: Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

Abstract: As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets. 1

5 0.13895045 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

6 0.13806406 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

7 0.10893483 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

8 0.097099543 192 nips-2009-Posterior vs Parameter Sparsity in Latent Variable Models

9 0.079820722 240 nips-2009-Sufficient Conditions for Agnostic Active Learnable

10 0.067571498 84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection

11 0.067471683 49 nips-2009-Breaking Boundaries Between Induction Time and Diagnosis Time Active Information Acquisition

12 0.058398098 255 nips-2009-Variational Inference for the Nested Chinese Restaurant Process

13 0.055642623 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

14 0.052579071 171 nips-2009-Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution

15 0.048731424 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

16 0.044160973 205 nips-2009-Rethinking LDA: Why Priors Matter

17 0.042421248 2 nips-2009-3D Object Recognition with Deep Belief Nets

18 0.042284057 175 nips-2009-Occlusive Components Analysis

19 0.041932181 226 nips-2009-Spatial Normalized Gamma Processes

20 0.041869931 18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.148), (1, -0.084), (2, -0.04), (3, -0.143), (4, 0.087), (5, -0.182), (6, 0.104), (7, 0.007), (8, -0.049), (9, -0.02), (10, 0.013), (11, -0.019), (12, -0.112), (13, -0.055), (14, -0.21), (15, 0.069), (16, 0.223), (17, -0.011), (18, 0.168), (19, 0.09), (20, -0.031), (21, 0.032), (22, 0.052), (23, 0.002), (24, 0.065), (25, -0.06), (26, -0.12), (27, -0.023), (28, -0.01), (29, 0.037), (30, -0.144), (31, -0.03), (32, 0.119), (33, -0.003), (34, -0.005), (35, 0.037), (36, -0.028), (37, -0.089), (38, -0.142), (39, 0.178), (40, 0.014), (41, 0.091), (42, 0.018), (43, 0.048), (44, 0.099), (45, -0.033), (46, -0.045), (47, 0.02), (48, -0.06), (49, -0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93586898 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. 1

2 0.80023193 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

3 0.78702915 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

4 0.62655377 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

Author: Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

Abstract: As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets. 1

5 0.56220675 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.
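
The birth/death moves mentioned above are accepted or rejected with a standard Metropolis-Hastings test. The toy sketch below shows only that generic test; the log-probabilities are placeholder numbers standing in for quantities that, in the paper, would be computed by the sum-product algorithm over the time series.

```python
import numpy as np

def mh_accept(log_p_proposed, log_p_current, log_q_forward, log_q_reverse, rng):
    """Generic Metropolis-Hastings acceptance test on log scale."""
    log_ratio = (log_p_proposed - log_p_current) + (log_q_reverse - log_q_forward)
    return np.log(rng.random()) < min(0.0, log_ratio)

rng = np.random.default_rng(0)
print(mh_accept(-10.2, -11.0, -1.0, -1.0, rng))
```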

6 0.53790051 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

7 0.45826322 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

8 0.41402867 49 nips-2009-Breaking Boundaries Between Induction Time and Diagnosis Time Active Information Acquisition

9 0.4124105 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

10 0.40470952 171 nips-2009-Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution

11 0.36412358 192 nips-2009-Posterior vs Parameter Sparsity in Latent Variable Models

12 0.34329116 226 nips-2009-Spatial Normalized Gamma Processes

13 0.31837547 186 nips-2009-Parallel Inference for Latent Dirichlet Allocation on Graphics Processing Units

14 0.31487045 240 nips-2009-Sufficient Conditions for Agnostic Active Learnable

15 0.29941657 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

16 0.27280778 233 nips-2009-Streaming Pointwise Mutual Information

17 0.25734898 84 nips-2009-Evaluating multi-class learning strategies in a generative hierarchical framework for object detection

18 0.25064102 42 nips-2009-Bayesian Sparse Factor Models and DAGs Inference and Comparison

19 0.24240643 39 nips-2009-Bayesian Belief Polarization

20 0.24226674 197 nips-2009-Randomized Pruning: Efficiently Calculating Expectations in Large Dynamic Programs


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.305), (7, 0.013), (24, 0.043), (25, 0.055), (35, 0.063), (36, 0.086), (39, 0.037), (58, 0.05), (61, 0.021), (66, 0.027), (71, 0.075), (81, 0.026), (86, 0.072), (91, 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.83963174 50 nips-2009-Canonical Time Warping for Alignment of Human Behavior

Author: Feng Zhou, Fernando Torre

Abstract: Alignment of time series is an important problem to solve in many scientific disciplines. In particular, temporal alignment of two or more subjects performing similar activities is a challenging problem due to the large temporal scale difference between human actions as well as the inter/intra subject variability. In this paper we present canonical time warping (CTW), an extension of canonical correlation analysis (CCA) for spatio-temporal alignment of human motion between two subjects. CTW extends previous work on CCA in two ways: (i) it combines CCA with dynamic time warping (DTW), and (ii) it extends CCA by allowing local spatial deformations. We show CTW’s effectiveness in three experiments: alignment of synthetic data, alignment of motion capture data of two subjects performing similar actions, and alignment of similar facial expressions made by two people. Our results demonstrate that CTW provides both visually and qualitatively better alignment than state-of-the-art techniques based on DTW.
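
As context, CTW as summarized above alternates a CCA-style spatial projection with a temporal alignment step. The sketch below implements only the classic dynamic time warping recursion that the temporal step builds on, for 1-D sequences; the spatial projection is not shown.

```python
import numpy as np

def dtw_cost(X, Y):
    """Classic dynamic time warping cost between two 1-D sequences."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(X[i - 1] - Y[j - 1])
            # best of insertion, deletion, or match at this cell
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_cost(np.array([0., 1., 2., 3.]), np.array([0., 0., 1., 2., 3.])))
```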

same-paper 2 0.72953832 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville


3 0.5963403 199 nips-2009-Ranking Measures and Loss Functions in Learning to Rank

Author: Wei Chen, Tie-yan Liu, Yanyan Lan, Zhi-ming Ma, Hang Li

Abstract: Learning to rank has become an important research topic in machine learning. While most learning-to-rank methods learn the ranking functions by minimizing loss functions, it is the ranking measures (such as NDCG and MAP) that are used to evaluate the performance of the learned ranking functions. In this work, we reveal the relationship between ranking measures and loss functions in learning-to-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE. We show that the loss functions of these methods are upper bounds of the measure-based ranking errors. As a result, the minimization of these loss functions will lead to the maximization of the ranking measures. The key to obtaining this result is to model ranking as a sequence of classification tasks, and define a so-called essential loss for ranking as the weighted sum of the classification errors of individual tasks in the sequence. We have proved that the essential loss is both an upper bound of the measure-based ranking errors, and a lower bound of the loss functions in the aforementioned methods. Our proof technique also suggests a way to modify existing loss functions to make them tighter bounds of the measure-based ranking errors. Experimental results on benchmark datasets show that the modifications can lead to better ranking performances, demonstrating the correctness of our theoretical analysis.
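
To ground the measure/loss distinction discussed above, the toy functions below compute NDCG for a scored list and a Ranking-SVM-style pairwise hinge surrogate on the same list. They illustrate the two kinds of quantities being related, not the paper's bounds themselves.

```python
import numpy as np

def dcg(relevances):
    relevances = np.asarray(relevances, dtype=float)
    discounts = np.log2(np.arange(2, len(relevances) + 2))
    return np.sum((2 ** relevances - 1) / discounts)

def ndcg(scores, relevances):
    """NDCG of the ranking induced by `scores` against graded relevances."""
    order = np.argsort(-np.asarray(scores))
    return dcg(np.asarray(relevances)[order]) / dcg(sorted(relevances, reverse=True))

def pairwise_hinge(scores, relevances, margin=1.0):
    """A Ranking-SVM-style surrogate: hinge loss on each mis-ordered pair."""
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if relevances[i] > relevances[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
    return loss

scores = [2.0, 0.5, 1.5]
relevances = [0, 2, 1]   # the model's scores disagree with the true relevance
print(ndcg(scores, relevances), pairwise_hinge(scores, relevances))
```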

4 0.49875364 187 nips-2009-Particle-based Variational Inference for Continuous Systems

Author: Andrew Frank, Padhraic Smyth, Alexander T. Ihler

Abstract: Since the development of loopy belief propagation, there has been considerable work on advancing the state of the art for approximate inference over distributions defined on discrete random variables. Improvements include guarantees of convergence, approximations that are provably more accurate, and bounds on the results of exact inference. However, extending these methods to continuous-valued systems has lagged behind. While several methods have been developed to use belief propagation on systems with continuous values, recent advances for discrete variables have not as yet been incorporated. In this context we extend a recently proposed particle-based belief propagation algorithm to provide a general framework for adapting discrete message-passing algorithms to inference in continuous systems. The resulting algorithms behave similarly to their purely discrete counterparts, extending the benefits of these more advanced inference techniques to the continuous domain.
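
For reference, the discrete sum-product message update that particle-based BP approximates (by replacing the sum over states with samples) looks like the sketch below; the potentials here are arbitrary toy numbers, not taken from the paper.

```python
import numpy as np

def bp_message(psi_t, psi_ts, incoming):
    """Discrete sum-product message from node t to node s:
    m_ts(x_s) = sum_{x_t} psi_t(x_t) * psi_ts(x_t, x_s) * prod of the other
    incoming messages at t. Normalized for numerical stability."""
    belief = psi_t.copy()
    for m in incoming:           # messages from neighbors of t other than s
        belief *= m
    msg = psi_ts.T @ belief      # sum over x_t
    return msg / msg.sum()

psi_t = np.array([0.6, 0.4])                 # unary potential at node t
psi_ts = np.array([[0.9, 0.1], [0.2, 0.8]])  # pairwise potential psi(x_t, x_s)
incoming = [np.array([0.5, 0.5])]
print(bp_message(psi_t, psi_ts, incoming))
```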

5 0.49316084 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

Author: Khashayar Rohanimanesh, Sameer Singh, Andrew McCallum, Michael J. Black

Abstract: Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, parameters of the factor graph are treated as a log-linear function approximator and learned with methods of temporal difference (TD); MAP inference is performed by executing the resulting policy on held out test data. Our method allows efficient gradient updates since only factors in the neighborhood of variables affected by an action need to be computed—we bypass the need to compute marginals entirely. Our method yields dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.
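
The temporal-difference step the abstract refers to can be sketched in isolation as a TD(0) update with a linear value approximator over configuration features. The feature vectors below are placeholders for illustration, not the paper's factor features.

```python
import numpy as np

def td0_update(theta, phi_s, phi_s_next, reward, alpha=0.1, gamma=0.95):
    """One TD(0) step with a linear value approximator V(s) = theta . phi(s)."""
    td_error = reward + gamma * (theta @ phi_s_next) - (theta @ phi_s)
    return theta + alpha * td_error * phi_s

theta = np.zeros(4)
phi_s = np.array([1.0, 0.0, 0.5, 0.0])       # features of the current configuration
phi_s_next = np.array([0.0, 1.0, 0.5, 0.0])  # features after a proposed jump
theta = td0_update(theta, phi_s, phi_s_next, reward=1.0)
print(theta)
```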

6 0.4915112 132 nips-2009-Learning in Markov Random Fields using Tempered Transitions

7 0.48973224 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

8 0.48756135 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

9 0.48726088 260 nips-2009-Zero-shot Learning with Semantic Output Codes

10 0.48569858 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

11 0.48500723 113 nips-2009-Improving Existing Fault Recovery Policies

12 0.48481268 18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models

13 0.48448202 204 nips-2009-Replicated Softmax: an Undirected Topic Model

14 0.48260292 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior

15 0.48193818 19 nips-2009-A joint maximum-entropy model for binary neural population patterns and continuous signals

16 0.48102215 3 nips-2009-AUC optimization and the two-sample problem

17 0.48100609 226 nips-2009-Spatial Normalized Gamma Processes

18 0.48097408 129 nips-2009-Learning a Small Mixture of Trees

19 0.47993183 97 nips-2009-Free energy score space

20 0.47965783 169 nips-2009-Nonlinear Learning using Local Coordinate Coding