nips nips2009 nips2009-114 knowledge-graph by maker-knowledge-mining

114 nips-2009-Indian Buffet Processes with Power-law Behavior


Source: pdf

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. [sent-4, score-0.376]

2 We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. [sent-6, score-0.499]

3 We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). [sent-7, score-0.354]

4 We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. [sent-8, score-0.196]

5 1 Introduction The Indian buffet process (IBP) is an infinitely exchangeable distribution over binary matrices with a finite number of rows and an unbounded number of columns [1, 2]. [sent-9, score-0.277]

6 Using the usual analogy of customers entering an Indian buffet restaurant and sequentially choosing dishes from an infinitely long buffet counter, our generalization with parameters α > 0, c > −σ and σ ∈ [0, 1) is simply as follows: • Customer 1 tries Poisson(α) dishes. [sent-13, score-0.634]

7 • Subsequently, customer n + 1: – tries dish k with probability (m_k − σ)/(n + c), for each dish that has previously been tried; – tries Poisson(α Γ(1+c)Γ(n+c+σ)/(Γ(n+1+c)Γ(c+σ))) new dishes. [sent-14, score-0.493]

8 where mk is the number of previous customers who tried dish k. [sent-15, score-0.455]

9 The dishes and the customers correspond to the columns and the rows of the binary matrix respectively, with an entry of the matrix being one if the corresponding customer tried the dish (and zero otherwise). [sent-16, score-0.539]

10 The mass parameter α controls the total number of dishes tried by the customers, the concentration parameter c controls the number of customers that will try each dish, and the stability exponent σ controls the power-law behavior of the process. [sent-17, score-0.606]
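
The generative process in sentences 6–10 is straightforward to simulate. Below is a minimal Python sketch (our own code, not from the paper); the function name, the use of log-Gamma to evaluate the Poisson rate, and the return format (a list of per-dish customer counts) are all our choices.

```python
import numpy as np
from scipy.special import gammaln

def sample_stable_beta_ibp(n, alpha, c, sigma, rng=None):
    """Simulate the three-parameter (stable-beta) IBP for n customers.

    alpha > 0 is the mass parameter, c > -sigma the concentration, and
    0 <= sigma < 1 the stability exponent (sigma = 0 gives the usual
    two-parameter IBP).  Returns a list m with m[k] = number of customers
    who tried dish k.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = []  # running counts of customers per dish
    for i in range(1, n + 1):
        # dish k has been tried by m[k] of the i-1 previous customers:
        # customer i tries it with probability (m_k - sigma) / (i - 1 + c)
        for k in range(len(m)):
            if rng.random() < (m[k] - sigma) / (i - 1 + c):
                m[k] += 1
        # new dishes: Poisson with rate alpha * G(1+c) G(i-1+c+sigma) / (G(i+c) G(c+sigma))
        log_rate = (np.log(alpha) + gammaln(1 + c) + gammaln(i - 1 + c + sigma)
                    - gammaln(i + c) - gammaln(c + sigma))
        m.extend([1] * rng.poisson(np.exp(log_rate)))
    return m
```

For example, len(sample_stable_beta_ibp(10**4, alpha=1.0, c=1.0, sigma=0.5)) grows roughly like n^σ, which is the power-law behavior discussed in sentences 71–74 below.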

11 These examples are all based on the Pitman-Yor process [11, 12, 13], a generalization of the Dirichlet process [14] with power-law properties. [sent-21, score-0.239]

12 The approach we take in this paper is to first define the underlying de Finetti measure, then to derive the conditional distributions of Bernoulli process observations with the de Finetti measure integrated out. [sent-23, score-0.154]

13 It is a novel generalization of the beta process [15] (which is the de Finetti measure of the normal two-parameter IBP [16]) with characteristics reminiscent of the stable process [17, 11] (in turn related to the Pitman-Yor process). [sent-26, score-0.536]

14 In the following section we first give a brief description of completely random measures, a class of random measures which includes the stable-beta and the beta processes. [sent-28, score-0.287]

15 In Section 3 we introduce the stable-beta process, a three-parameter generalization of the beta process, and derive the power-law IBP based on the stable-beta process. [sent-29, score-0.331]

16 Based on the proposed model, in Section 4 we construct a model of word occurrences in a document corpus. [sent-30, score-0.169]

17 A completely random measure (CRM) µ over (Θ, Ω) is a random measure such that µ(A) ⊥⊥ µ(B) for all disjoint measurable subsets A, B ∈ Ω. [sent-35, score-0.196]

18 CRMs can always be decomposed into a sum of three independent parts: a (non-random) measure, an atomic measure with fixed atoms but random masses, and an atomic measure with random atoms and masses. [sent-38, score-0.764]

19 In this case we can write µ in the form µ = Σ_{k=1}^{N} u_k δ_{φ_k} + Σ_{l=1}^{M} v_l δ_{ψ_l} (1), where u_k, v_l > 0 are the random masses, φ_k ∈ Θ are the fixed atoms, ψ_l ∈ Θ are the random atoms, and N, M ∈ ℕ ∪ {∞}. [sent-40, score-0.876]

20 The random atoms and their weights {v_l, ψ_l} are jointly drawn from a 2D Poisson process over (0, ∞] × Θ with some nonatomic rate measure Λ called the Lévy measure. [sent-43, score-0.601]

21 If ∫_Θ ∫_{(0,∞]} Λ(du × dθ) = M* < ∞ then the number of random atoms M in µ is Poisson distributed with mean M*, otherwise there are an infinite number of random atoms. [sent-45, score-0.334]
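
Sentences 19–21 describe how the purely random part of a CRM arises from a Poisson process. When the total mass M* is finite, this part can be sampled directly; the sketch below is illustrative only, and the two sampler callables (which must draw iid from the normalized rate measure, assumed here to factorize into a mass part and a location part) are placeholders supplied by the caller, not from the paper.

```python
import numpy as np

def sample_crm_random_atoms(M_star, sample_mass, sample_location, rng=None):
    """Sample the purely-random part of a CRM when the Levy measure has finite
    total mass M_star = integral of Lambda(du x dtheta) < infinity.

    `sample_mass` and `sample_location` are caller-supplied placeholders that
    draw iid samples from the normalized rate measure (assumed, for simplicity,
    to factorize).
    """
    rng = np.random.default_rng() if rng is None else rng
    M = rng.poisson(M_star)  # number of random atoms is Poisson(M*)
    return [(sample_mass(), sample_location()) for _ in range(M)]
```

For instance, sample_crm_random_atoms(5.0, lambda: np.random.beta(0.5, 1.5), lambda: np.random.normal()) returns a Poisson(5)-sized list of (mass, location) pairs.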

22 The mass parameter controls the overall mass of the process and the base distribution gives the distribution over the random atom locations. [sent-49, score-0.394]

23 When σ = 0 the SBP does not have power-law behavior and reduces to a normal two-parameter beta process [15, 16]. [sent-53, score-0.326]

24 When c = 1 − σ the stable-beta process describes the random atoms with masses < 1 in a stable process [17, 11]. [sent-54, score-0.633]

25 The SBP is so named as it can be seen as a generalization of both the stable and the beta processes. [sent-55, score-0.254]

26 (4) The random measure µ is an SBP with no fixed atoms and with Lévy measure (3), while Z_i ∼ BernoulliP(µ) is a Bernoulli process with mean µ [16]. [sent-62, score-0.652]

27 This is also a CRM: in a small neighborhood dθ around θ ∈ Θ it has a probability µ(dθ) of having a unit mass atom in dθ; otherwise it does not have an atom in dθ. [sent-63, score-0.279]

28 If µ has an atom at θ the probability of Zi having an atom at θ as well is µ({θ}). [sent-64, score-0.226]

29 If µ has a smooth component, say µ0 , Zi will have random atoms drawn from a Poisson process with rate measure µ0 . [sent-65, score-0.48]

30 In typical applications to featural models the atoms in Zi give the features associated with data item i, while the weights of the atoms in µ give the prior probabilities of the corresponding features occurring in a data item. [sent-66, score-0.644]

31 , Zn is still a CRM, but now including fixed atoms given by θ1 , . [sent-85, score-0.286]

32 Its updated Lévy measure and the distribution of the mass at each fixed atom θ_k* are, µ|Z_1, . [sent-89, score-0.374]

33 The updated Lévy measure is Λ_n(du × dθ) = α Γ(1+c)/(Γ(1−σ)Γ(c+σ)) u^{−σ−1}(1−u)^{n+c+σ−1} du H(dθ) (6a), and the mass at each fixed atom θ_k* is beta distributed with normalizing constant involving Γ(m_k − σ)Γ(n − m_k + c + σ) (6b). Intuitively, the posterior is obtained as follows. [sent-93, score-0.177]

34 Secondly, µ must have fixed atoms at each θ_k* since otherwise the probability that there will be atoms among Z_1, . [sent-95, score-0.177]

35 The posterior mass at θ_k* is obtained by multiplying a Bernoulli “likelihood” u^{m_k}(1 − u)^{n−m_k} (since there are m_k occurrences of the atom θ_k* among Z_1, . [sent-99, score-0.579]

36 Finally, outside of these K atoms there are no other atoms among Z1 , . [sent-103, score-0.594]

37 We can think of this as n observations of 0 among n iid Bernoulli variables, so a “likelihood” of (1 − u)^n is multiplied into Λ_0 (without normalization), giving the updated Lévy measure in (6a). [sent-107, score-0.253]

38 Let us inspect the distributions (6) of the fixed and random atoms in the posterior µ in turn. [sent-108, score-0.345]

39 The random mass at θ_k* has a distribution F_{nk} which is simply a beta distribution with parameters (m_k − σ, n − m_k + c + σ). [sent-109, score-0.414]

40 This differs from the usual beta process in the subtraction of σ from mk and addition of σ to n − mk + c. [sent-110, score-0.604]

41 This is reminiscent of the Pitman-Yor generalization to the Dirichlet process [11, 12, 13], where a discount parameter is subtracted from the number of customers seated around each table, and added to the chance of sitting at a new table. [sent-111, score-0.299]

42 On the other hand, the Lévy measure of the random atoms of µ is still a Lévy measure corresponding to an SBP with updated parameters α ← α Γ(1+c)Γ(n+c+σ)/(Γ(n+1+c)Γ(c+σ)), c ← c + n, σ ← σ, H ← H. [sent-112, score-0.706]

43 In summary, the posterior of µ is simply an independent sum of an SBP with updated parameters and of fixed atoms with beta distributed masses. [sent-117, score-0.536]

44 This is different from the beta process and again reminiscent of Pitman-Yor processes, where the posterior is also a sum of a Pitman-Yor process with updated parameters and fixed atoms with random masses, but not a Pitman-Yor process [11]. [sent-120, score-0.894]

45 In the next subsections we describe an Indian buffet process and a stick-breaking construction corresponding to the SBP. [sent-122, score-0.251]

46 Efficient inference techniques based on both representations for the beta process can be straightforwardly generalized to the SBP [1, 16, 21]. [sent-123, score-0.298]

47 2 The Stable-beta Indian Buffet Process We can derive an Indian buffet process (IBP) corresponding to the SBP by deriving, for each n, the distribution of Zn+1 conditioned on Z1 , . [sent-125, score-0.224]

48 This derivation is straightforward and follows closely that for the beta process [16]. [sent-129, score-0.298]

49 For each of the atoms θ_k*, the posterior of µ(θ_k*) given Z_1, . [sent-130, score-0.463]

50 , Z_n] = (m_k − σ)/(n + c) (8). Metaphorically speaking, customer n + 1 tries dish k with probability (m_k − σ)/(n + c). [sent-140, score-0.486]

51 , Z_n] = ∫_0^1 u · α Γ(1+c)/(Γ(1−σ)Γ(c+σ)) u^{−1−σ}(1−u)^{n+c+σ−1} du H(dθ) = α Γ(1+c)/(Γ(1−σ)Γ(c+σ)) H(dθ) ∫_0^1 u^{−σ}(1−u)^{n+c+σ−1} du = α Γ(1+c)Γ(n+c+σ)/(Γ(n+1+c)Γ(c+σ)) H(dθ) (9). Since Z_{n+1} is completely random and H is smooth, the above shows that on Θ \ {θ_1*, . [sent-152, score-0.17]
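
The middle equality in (9) is a standard Beta-function identity; spelled out (our reconstruction of the step, not text extracted from the paper):

```latex
\[
\int_0^1 u^{-\sigma}(1-u)^{n+c+\sigma-1}\,du
  = B(1-\sigma,\; n+c+\sigma)
  = \frac{\Gamma(1-\sigma)\,\Gamma(n+c+\sigma)}{\Gamma(n+1+c)},
\quad\text{hence}\quad
\alpha\,\frac{\Gamma(1+c)}{\Gamma(1-\sigma)\Gamma(c+\sigma)}
  \cdot\frac{\Gamma(1-\sigma)\,\Gamma(n+c+\sigma)}{\Gamma(n+1+c)}
  = \alpha\,\frac{\Gamma(1+c)\,\Gamma(n+c+\sigma)}{\Gamma(n+1+c)\,\Gamma(c+\sigma)}.
\]
```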

52 In the IBP metaphor, this corresponds to customer n+1 trying new dishes, with each dish associated with a new draw from H. [sent-157, score-0.201]

53 The resulting Indian buffet process is as described in the introduction. [sent-158, score-0.224]

54 , Z_n) = exp(−α Σ_{i=1}^{n} ···) Π_{k=1}^{K} ··· (10), where there are K atoms (dishes) θ_1*, . [sent-166, score-0.286]

55 , Zn with atom k appearing mk times, and h is the density of H. [sent-172, score-0.255]

56 terms in [1] are absent as we have to distinguish among these K_h dishes in assigning each of them a distinct atom (this also contributes the h(θ_k*) terms). [sent-175, score-0.308]

57 3 Stick-breaking constructions In this section we describe stick-breaking constructions for the SBP generalizing those for the beta process. [sent-181, score-0.274]

58 The first is based on the size-biased ordering of atoms induced by the IBP [16], while the second is based on the inverse Lévy measure method [22], and produces a sequence of random atoms of strictly decreasing masses [21]. [sent-182, score-0.875]

59 The size-biased construction is straightforward: we use the IBP to generate the atoms (dishes) in the SBP; each time a dish is newly generated the atom is drawn from H and its mass from Fnk . [sent-183, score-0.602]

60 The inverse Lévy measure is a general method of generating from a Poisson process with non-uniform rate measure. [sent-191, score-0.291]

61 The Lévy measure Λ_0 of the SBP factorizes into a product Λ_0(du × dθ) = L(du)H(dθ) of a σ-finite measure L(du) = α Γ(1+c)/(Γ(1−σ)Γ(c+σ)) u^{−σ−1}(1−u)^{c+σ−1} du over (0, 1) and a probability measure H over Θ. [sent-194, score-0.408]

62 This implies that we can generate a sample {v_l, ψ_l}_{l=1}^∞ of the random atoms of µ and their masses by first sampling the masses {v_l}_{l=1}^∞ ∼ PoissonP(L) from a Poisson process on (0, 1) with rate measure L, and associating each v_l with an iid draw ψ_l ∼ H [19]. [sent-195, score-1.059]

63 Transforming back with v_l = T^{−1}(t_l), we have {v_l}_{l=1}^∞ ∼ PoissonP(L). [sent-200, score-0.39]

64 Deriving the density of v_l given v_{l−1}, we get: p(v_l | v_{l−1}) = |dt_l/dv_l| p(t_l | t_{l−1}) = α Γ(1+c)/(Γ(1−σ)Γ(c+σ)) v_l^{−σ−1} (1−v_l)^{c+σ−1} exp(−∫_{v_l}^{v_{l−1}} L(du)) (13). [sent-208, score-0.78]

65 In general these densities do not simplify and we have to resort to solving for T^{−1}(t_l) numerically. [sent-209, score-0.39]

66 In the stable process case, when c = 1 − σ and σ > 0, the density of v_l simplifies to: p(v_l | v_{l−1}) = α Γ(2−σ)/(Γ(1−σ)Γ(1)) v_l^{−σ−1} exp(−∫_{v_l}^{v_{l−1}} α Γ(2−σ)/(Γ(1−σ)Γ(1)) u^{−σ−1} du) = α(1−σ) v_l^{−σ−1} exp(−(α(1−σ)/σ)(v_l^{−σ} − v_{l−1}^{−σ})) (14). [sent-212, score-1.417]

67 Doing a change of variables to y_l = v_l^{−σ}, we get: p(y_l | y_{l−1}) = (α(1−σ)/σ) exp(−(α(1−σ)/σ)(y_l − y_{l−1})) (15). [sent-213, score-0.459]
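
Because (15) says the increments y_l − y_{l−1} are iid Exponential with rate α(1−σ)/σ, the strictly decreasing masses in the stable case c = 1 − σ can be sampled directly. A minimal sketch (our code; the starting point y_0 = 1, i.e. v_0 = 1, and the requirement σ > 0 are our assumptions):

```python
import numpy as np

def stable_sbp_masses(num_atoms, alpha, sigma, rng=None):
    """Strictly decreasing atom masses for the SBP in the stable special case
    c = 1 - sigma, where L(du) = alpha*(1 - sigma)*u**(-sigma - 1) du on (0, 1).

    Uses the change of variables y = v**(-sigma): by (15) the increments of y
    are iid Exponential with rate alpha*(1 - sigma)/sigma.  Assumes sigma > 0;
    the starting point y_0 = 1 (i.e. v_0 = 1) is our convention.
    """
    rng = np.random.default_rng() if rng is None else rng
    rate = alpha * (1.0 - sigma) / sigma
    y = 1.0 + np.cumsum(rng.exponential(scale=1.0 / rate, size=num_atoms))
    return y ** (-1.0 / sigma)   # masses v_1 > v_2 > ... in (0, 1)
```

Each mass v_l would then be paired with an iid location ψ_l ∼ H, as in sentence 62.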

68 In this section we shall assume σ > 0 since the case σ = 0 reduces the SBP to the usual beta process with less interesting power-law properties. [sent-219, score-0.32]


70 Figure 1: Power-law properties of the stable-beta Indian buffet process (left panel: mean number of dishes tried versus number of customers; right panel: number of dishes versus number of customers trying each dish). [sent-227, score-0.545]

71 Firstly, the total number of dishes tried by n customers is O(n^σ). [sent-228, score-0.363]

72 Secondly, the number of customers trying each dish follows a Zipf’s law [23]. [sent-230, score-0.286]

73 This is shown in the right panel of Figure 1, which plots the number of dishes Km versus the number of customers m trying each dish (that is, Km is the number of dishes k for which mk = m). [sent-231, score-0.774]

74 Asymptotically we can show that the proportion of dishes tried by m customers is O(m−1−σ ). [sent-232, score-0.363]

75 One aspect of the SBP which is not power-law is the number of dishes each customer tries. [sent-234, score-0.226]

76 4 Word Occurrence Models with Stable-beta Processes In this section we use the SBP as a model for word occurrences in document corpora. [sent-237, score-0.169]

77 Let Zi ({θ}) = 1 if word type θ occurs in document i and 0 otherwise, and let µ({θ}) be the occurrence probability of word type θ among the documents in the corpus. [sent-239, score-0.313]

78 We use the hierarchical model (4) with a SBP prior1 on µ and with each document modeled as a conditionally independent Bernoulli process draw. [sent-240, score-0.169]

79 We use the popularity of each word type across all 20 newsgroups as the base distribution2 : for each word type θ let nθ be the number of documents containing θ and let H({θ}) ∝ nθ . [sent-247, score-0.273]

80 In the first experiment we compared the SBP to the beta process by fitting the parameters α, c and σ of both models to each newsgroup by maximum likelihood (in beta process case σ is fixed at 0) . [sent-248, score-0.627]

81 In comparison, the parameters values obtained by the beta process are α = 147. [sent-257, score-0.298]

82 Note that the estimated values for c are significantly larger than for the SBP, to allow the beta process to model the fact that many words occur in a small number of documents (a consequence of the power-law properties of the corpora). [Footnote 1: Words are discrete objects.] [sent-262, score-0.366]
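
As a hedged illustration of how fitted (α, c, σ) values translate into the model curves of Figure 2 (left panel): the expected number of new distinct words contributed by the i-th document is the Poisson rate α Γ(1+c)Γ(i−1+c+σ)/(Γ(i+c)Γ(c+σ)), so the expected cumulative vocabulary size is the running sum of these rates. The sketch below is our own, and the parameter values are placeholders rather than the fitted values reported in the text.

```python
import numpy as np
from scipy.special import gammaln

def expected_cumulative_vocab(n_docs, alpha, c, sigma):
    """Expected number of distinct words after n_docs documents under the SBP
    word-occurrence model (sigma = 0 corresponds to the beta-process / BP case)."""
    i = np.arange(1, n_docs + 1)
    log_rates = (np.log(alpha) + gammaln(1 + c) + gammaln(i - 1 + c + sigma)
                 - gammaln(i + c) - gammaln(c + sigma))
    return np.cumsum(np.exp(log_rates))

# Placeholder parameter values for illustration only, not the fitted values in the text.
sbp_curve = expected_cumulative_vocab(500, alpha=100.0, c=5.0, sigma=0.5)
bp_curve  = expected_cumulative_vocab(500, alpha=147.0, c=25.0, sigma=0.0)
```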

83 Figure 2: Power-law properties of the 20newsgroups dataset (left: cumulative number of words versus number of documents, for BP, SBP, and data; right: number of words versus number of documents per word). [sent-267, score-0.233]

84 The dashed lines are the means of the word distributions generated by the ML parameters for the beta process (pink) and the SBP (green). [sent-269, score-0.377]

85 Table 1: Classification performance of SBP and beta process (BP). [sent-270, score-0.298]

86 The SBP has a much better fit than the beta process to the power-law properties of the corpora. [sent-315, score-0.298]

87 The rank j classification performance is defined to be the percentage of documents where the true label is among the top j predicted classes (as determined by the IBP conditional probabilities of the documents under each of the 20 newsgroup classes). [sent-318, score-0.189]

88 To see the effect of sample size on model performance we tried splitting the documents in each newsgroup into 20% training, 20% validation and 60% test sets, and into 60% training, 20% validation and 20% test sets. [sent-320, score-0.185]

89 Figure 3 shows that the SBP model has generally higher classification performances than the beta process. [sent-323, score-0.195]

90 The stable-beta process is a generalization of the beta process, and can be used in nonparametric Bayesian featural models with an unbounded number of features. [sent-325, score-0.454]

91 As opposed to the beta process, the stable-beta process has a number of appealing power-law properties. [sent-326, score-0.298]

92 We developed both an Indian buffet process and a stick-breaking construction for the stable-beta process and applied it to modeling word occurrences in document corpora. [sent-327, score-0.523]

93 Figure 3: Differences between the classification rates of the SBP and the beta process (SBP − BP, ×10⁻³, versus class order). [sent-329, score-0.195]

94 The performance of the SBP was consistently higher than that of the beta process for each of the five runs. [sent-330, score-0.298]

95 We derived the stable-beta process as a completely random measure with L´ vy measure (3). [sent-331, score-0.394]

96 Until this is resolved in the positive, we are not able to define hierarchical stable-beta processes generalizing the hierarchical beta processes [16]. [sent-334, score-0.34]

97 The expected number of dishes is E[K] = Σ_{i=1}^{n} α Γ(1+c)Γ(i+c+σ−1)/(Γ(i+c)Γ(c+σ)); applying Stirling's approximation to the Gamma ratios gives terms of order O(i^{σ−1}), so E[K] = O(Σ_{i=1}^{n} i^{σ−1}) = O(n^σ). [sent-339, score-0.173]
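
A quick numerical sanity check of the O(n^σ) claim, reusing the same Gamma-ratio rate sum as the earlier sketch (our code, not from the paper): the ratio E[K]/n^σ should level off as n grows.

```python
import numpy as np
from scipy.special import gammaln

alpha, c, sigma = 1.0, 1.0, 0.5
for n in (10**2, 10**3, 10**4, 10**5):
    i = np.arange(1, n + 1)
    log_rates = (np.log(alpha) + gammaln(1 + c) + gammaln(i - 1 + c + sigma)
                 - gammaln(i + c) - gammaln(c + sigma))
    expected_K = np.exp(log_rates).sum()
    print(n, expected_K, expected_K / n**sigma)   # last column levels off for large n
```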

98 , Kn ), where Km is the number of dishes tried by exactly m customers and where there are a total of n customers in the restaurant. [sent-343, score-0.501]

99 , Kn ) is multinomial with the probability of a dish having m customers being proportional to the term in large parentheses. [sent-364, score-0.261]

100 Nonparametric Bayes estimators based on beta processes in models for life history data. [sent-458, score-0.226]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('sbp', 0.561), ('vl', 0.39), ('atoms', 0.286), ('ibp', 0.202), ('zn', 0.201), ('beta', 0.195), ('dishes', 0.173), ('mk', 0.142), ('customers', 0.138), ('vy', 0.137), ('dish', 0.123), ('buffet', 0.121), ('crm', 0.12), ('du', 0.118), ('atom', 0.113), ('process', 0.103), ('indian', 0.101), ('poisson', 0.099), ('masses', 0.091), ('word', 0.079), ('poissonp', 0.076), ('yl', 0.069), ('tl', 0.068), ('documents', 0.068), ('finetti', 0.066), ('km', 0.062), ('fnk', 0.061), ('occurrences', 0.055), ('featural', 0.054), ('customer', 0.053), ('mass', 0.053), ('tried', 0.052), ('measure', 0.051), ('bp', 0.05), ('duh', 0.046), ('nonparametric', 0.045), ('exponent', 0.041), ('kn', 0.036), ('nitely', 0.035), ('document', 0.035), ('zi', 0.035), ('bernoulli', 0.035), ('posterior', 0.035), ('grif', 0.034), ('generalization', 0.033), ('jn', 0.032), ('volume', 0.032), ('hierarchical', 0.031), ('ths', 0.031), ('newsgroup', 0.031), ('processes', 0.031), ('occurrence', 0.03), ('bernoullip', 0.03), ('crms', 0.03), ('vnk', 0.03), ('stability', 0.03), ('bayesian', 0.029), ('constructions', 0.029), ('exchangeable', 0.029), ('behavior', 0.028), ('completely', 0.028), ('marginalized', 0.028), ('construction', 0.027), ('tries', 0.026), ('base', 0.026), ('deriving', 0.026), ('stable', 0.026), ('reminiscent', 0.025), ('concentration', 0.025), ('trying', 0.025), ('pitman', 0.024), ('uk', 0.024), ('random', 0.024), ('unbounded', 0.024), ('fk', 0.024), ('iid', 0.023), ('thibaux', 0.023), ('controls', 0.022), ('usual', 0.022), ('among', 0.022), ('dyadic', 0.022), ('ml', 0.021), ('generalizing', 0.021), ('atomic', 0.021), ('newsgroups', 0.021), ('kh', 0.021), ('updated', 0.02), ('latent', 0.019), ('cumulative', 0.018), ('disjoint', 0.018), ('occurring', 0.018), ('firstly', 0.018), ('linguistics', 0.018), ('gatsby', 0.018), ('validation', 0.017), ('qn', 0.017), ('annals', 0.017), ('multiplying', 0.017), ('measures', 0.016), ('smooth', 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

2 0.17466025 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.

3 0.13946138 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

4 0.13895045 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. 1

5 0.10871107 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

Author: Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

Abstract: As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets. 1

6 0.10215394 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

7 0.093052424 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

8 0.081645414 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

9 0.078754708 205 nips-2009-Rethinking LDA: Why Priors Matter

10 0.071564637 226 nips-2009-Spatial Normalized Gamma Processes

11 0.062484909 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

12 0.053835213 4 nips-2009-A Bayesian Analysis of Dynamics in Free Recall

13 0.053623799 50 nips-2009-Canonical Time Warping for Alignment of Human Behavior

14 0.05066647 255 nips-2009-Variational Inference for the Nested Chinese Restaurant Process

15 0.047834259 65 nips-2009-Decoupling Sparsity and Smoothness in the Discrete Hierarchical Dirichlet Process

16 0.044165649 136 nips-2009-Learning to Rank by Optimizing NDCG Measure

17 0.042197332 190 nips-2009-Polynomial Semantic Indexing

18 0.038957197 204 nips-2009-Replicated Softmax: an Undirected Topic Model

19 0.038699374 82 nips-2009-Entropic Graph Regularization in Non-Parametric Semi-Supervised Classification

20 0.036778919 5 nips-2009-A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.122), (1, -0.053), (2, -0.036), (3, -0.12), (4, 0.1), (5, -0.181), (6, 0.045), (7, -0.022), (8, -0.022), (9, -0.001), (10, 0.041), (11, 0.006), (12, -0.101), (13, -0.045), (14, -0.188), (15, 0.0), (16, 0.084), (17, 0.018), (18, 0.107), (19, 0.143), (20, 0.001), (21, 0.005), (22, 0.047), (23, -0.076), (24, 0.02), (25, -0.01), (26, -0.039), (27, -0.002), (28, -0.094), (29, 0.034), (30, -0.085), (31, -0.004), (32, 0.033), (33, 0.029), (34, -0.064), (35, 0.006), (36, 0.037), (37, 0.034), (38, -0.078), (39, 0.102), (40, 0.038), (41, 0.032), (42, -0.007), (43, 0.06), (44, 0.119), (45, 0.026), (46, -0.044), (47, 0.097), (48, -0.002), (49, 0.042)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94746447 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

2 0.78540117 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. 1

3 0.67075366 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

4 0.66573417 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.

5 0.57794541 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

Author: Francois Caron, Arnaud Doucet

Abstract: Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose here extensions of these models where the dependency between samples is given by a known decomposable graph. These models have appealing properties and can be easily learned using Monte Carlo techniques. 1

6 0.52937728 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

7 0.51802188 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

8 0.49531025 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

9 0.48454785 226 nips-2009-Spatial Normalized Gamma Processes

10 0.35804838 192 nips-2009-Posterior vs Parameter Sparsity in Latent Variable Models

11 0.33244285 171 nips-2009-Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution

12 0.31267774 50 nips-2009-Canonical Time Warping for Alignment of Human Behavior

13 0.30752 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

14 0.29708877 65 nips-2009-Decoupling Sparsity and Smoothness in the Discrete Hierarchical Dirichlet Process

15 0.29378036 247 nips-2009-Time-rescaling methods for the estimation and assessment of non-Poisson neural encoding models

16 0.28277484 4 nips-2009-A Bayesian Analysis of Dynamics in Free Recall

17 0.28087968 143 nips-2009-Localizing Bugs in Program Executions with Graphical Models

18 0.27931735 233 nips-2009-Streaming Pointwise Mutual Information

19 0.26470062 205 nips-2009-Rethinking LDA: Why Priors Matter

20 0.26043981 136 nips-2009-Learning to Rank by Optimizing NDCG Measure


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.01), (24, 0.042), (25, 0.048), (35, 0.049), (36, 0.054), (39, 0.067), (58, 0.057), (61, 0.019), (66, 0.39), (71, 0.067), (81, 0.019), (86, 0.059), (91, 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78912896 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

2 0.66444719 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

Author: Long Zhu, Yuanahao Chen, Bill Freeman, Antonio Torralba

Abstract: We present a nonparametric Bayesian method for texture learning and synthesis. A texture image is represented by a 2D Hidden Markov Model (2DHMM) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet process (HDP) which allows the number of textons and the complexity of transition matrix grow as the input texture becomes irregular. The HDP makes use of Dirichlet process prior which favors regular textures by penalizing the model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. The HDP-2DHMM results in a compact representation of textures which allows fast texture synthesis with comparable rendering quality over the state-of-the-art patch-based rendering methods. We also show that the HDP2DHMM can be applied to perform image segmentation and synthesis. The preliminary results suggest that HDP-2DHMM is generally useful for further applications in low-level vision problems. 1

3 0.64141029 101 nips-2009-Generalization Errors and Learning Curves for Regression with Multi-task Gaussian Processes

Author: Kian M. Chai

Abstract: We provide some insights into how task correlations in multi-task Gaussian process (GP) regression affect the generalization error and the learning curve. We analyze the asymmetric two-tasks case, where a secondary task is to help the learning of a primary task. Within this setting, we give bounds on the generalization error and the learning curve of the primary task. Our approach admits intuitive understandings of the multi-task GP by relating it to single-task GPs. For the case of one-dimensional input-space under optimal sampling with data only for the secondary task, the limitations of multi-task GP can be quantified explicitly. 1

4 0.60580301 187 nips-2009-Particle-based Variational Inference for Continuous Systems

Author: Andrew Frank, Padhraic Smyth, Alexander T. Ihler

Abstract: Since the development of loopy belief propagation, there has been considerable work on advancing the state of the art for approximate inference over distributions defined on discrete random variables. Improvements include guarantees of convergence, approximations that are provably more accurate, and bounds on the results of exact inference. However, extending these methods to continuous-valued systems has lagged behind. While several methods have been developed to use belief propagation on systems with continuous values, recent advances for discrete variables have not as yet been incorporated. In this context we extend a recently proposed particle-based belief propagation algorithm to provide a general framework for adapting discrete message-passing algorithms to inference in continuous systems. The resulting algorithms behave similarly to their purely discrete counterparts, extending the benefits of these more advanced inference techniques to the continuous domain. 1

5 0.51664001 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

Author: Lei Shi, Thomas L. Griffiths

Abstract: The goal of perception is to infer the hidden states in the hierarchical process by which sensory data are generated. Human behavior is consistent with the optimal statistical solution to this problem in many tasks, including cue combination and orientation detection. Understanding the neural mechanisms underlying this behavior is of particular importance, since probabilistic computations are notoriously challenging. Here we propose a simple mechanism for Bayesian inference which involves averaging over a few feature detection neurons which fire at a rate determined by their similarity to a sensory stimulus. This mechanism is based on a Monte Carlo method known as importance sampling, commonly used in computer science and statistics. Moreover, a simple extension to recursive importance sampling can be used to perform hierarchical Bayesian inference. We identify a scheme for implementing importance sampling with spiking neurons, and show that this scheme can account for human behavior in cue combination and the oblique effect. 1

6 0.41996163 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

7 0.41411394 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

8 0.41248053 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

9 0.41181636 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

10 0.40941399 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

11 0.40881374 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

12 0.39476821 215 nips-2009-Sensitivity analysis in HMMs with application to likelihood maximization

13 0.36765748 205 nips-2009-Rethinking LDA: Why Priors Matter

14 0.3643693 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

15 0.35859728 112 nips-2009-Human Rademacher Complexity

16 0.35840878 113 nips-2009-Improving Existing Fault Recovery Policies

17 0.35778397 226 nips-2009-Spatial Normalized Gamma Processes

18 0.35697821 204 nips-2009-Replicated Softmax: an Undirected Topic Model

19 0.35667121 154 nips-2009-Modeling the spacing effect in sequential category learning

20 0.3560479 145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability