nips nips2009 nips2009-217 knowledge-graph by maker-knowledge-mining

217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes


Source: pdf

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. [sent-12, score-0.883]

2 We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. [sent-14, score-0.376]

3 In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. [sent-15, score-0.552]

4 We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data. [sent-16, score-0.173]

5 1 Introduction In many applications, one would like to discover and model dynamical behaviors which are shared among several related time series. [sent-17, score-0.58]

6 For example, consider video or motion capture data depicting multiple people performing a number of related tasks. [sent-18, score-0.173]

7 We specifically focus on time series where behaviors can be individually modeled via temporally independent or linear dynamical systems, and where transitions between behaviors are approximately Markovian. [sent-20, score-0.802]

8 Examples of such Markov jump processes include the hidden Markov model (HMM), switching vector autoregressive (VAR) process, and switching linear dynamical system (SLDS). [sent-21, score-0.622]

9 Our approach envisions a large library of behaviors, and each time series or object exhibits a subset of these behaviors. [sent-23, score-0.181]

10 We then seek a framework for discovering the set of dynamic behaviors that each object exhibits. [sent-24, score-0.444]

11 One can represent the set of behaviors an object exhibits via an associated list of features. [sent-26, score-0.404]

12 Setting fik = 1 implies that object i exhibits feature k. [sent-28, score-0.593]

13 Our desiderata motivate a Bayesian nonparametric approach based on the beta process [10, 22], allowing for infinitely many potential features. [sent-29, score-0.326]

14 Integrating over the latent beta process induces a predictive distribution on features known as the Indian buffet process (IBP) [9]. [sent-30, score-0.561]

15 These models are quite different from our framework: the HDP-HMM does not select a subset of behaviors for a given time series, but assumes that all time series share the same set of behaviors and switch among them in exactly the same manner. [sent-33, score-0.593]

16 Our work focuses on modeling multiple time series and on capturing dynamical modes that are shared among the series. [sent-35, score-0.362]

17 In particular, we exploit the finite dynamical system induced by a fixed set of features to efficiently compute acceptance probabilities, and reversible jump birth and death proposals to explore new features. [sent-37, score-0.708]

18 We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising unsupervised segmentation of data from the CMU motion capture database [23]. [sent-38, score-0.173]

19 2 Binary Features and Beta Processes The beta process is a completely random measure [12]: draws are discrete with probability one, and realizations on disjoint sets are independent random variables. [sent-39, score-0.291]

20 A draw B ∼ BP(c, B0) is characterized by the Lévy measure ν(dω, dθ) = c ω^{−1}(1 − ω)^{c−1} dω B0(dθ). (1) Here, c > 0 is a concentration parameter; we denote such a beta process by BP(c, B0 ). [sent-42, score-0.291]

21 The beta process is conjugate to a class of Bernoulli processes [22], denoted by BeP(B), which provide our sought-for featural representation. [sent-48, score-0.379]

22 A realization Xi ∼ BeP(B), with B an atomic measure, is a collection of unit mass atoms on Θ located at some subset of the atoms in B. [sent-49, score-0.154]

23 In particular, fik ∼ Bernoulli(ωk ) is sampled independently for each atom θk in Eq. [sent-50, score-0.395]

24 A Bernoulli process realization Xi then determines the subset of features allocated to object i: B | B0, c ∼ BP(c, B0); Xi | B ∼ BeP(B), i = 1, …, N. (3) [sent-53, score-0.224]

25 Because beta process priors are conjugate to the Bernoulli process [22], the posterior distribution given N samples Xi ∼ BeP(B) is a beta process with updated parameters: B | X1, …, XN, B0, c ∼ BP(c + N, c/(c + N) B0 + Σ_k m_k/(c + N) δ_{θk}). (4) [sent-57, score-0.749]

26 Here, mk denotes the number of objects Xi which select the kth feature θk. [sent-61, score-0.18]
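
To make Eq. (4) concrete, here is a minimal Python sketch (ours, not the authors' code) that computes the updated beta process parameters from the feature counts m_k; the function name is illustrative.

    import numpy as np

    def beta_process_posterior(c, N, m):
        """Posterior of B | X_1,...,X_N as in Eq. (4): BP(c + N, B_post).

        m : counts m_k of how many of the N Bernoulli process draws selected
            atom theta_k.  Returns the new concentration, the weight left on
            the base measure B0, and the discrete masses on the observed atoms.
        """
        m = np.asarray(m, dtype=float)
        c_post = c + N
        base_weight = c / c_post          # coefficient multiplying B0
        atom_masses = m / c_post          # mass placed on each delta_{theta_k}
        return c_post, base_weight, atom_masses

    print(beta_process_posterior(c=1.0, N=5, m=[3, 1, 1]))
    # -> new concentration 6, 1/6 of the mass left on B0, atom masses 3/6, 1/6, 1/6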

27 For simplicity, we have reordered the feature indices to list the K+ features used by at least one object first. [sent-62, score-0.212]

28 Computationally, Bernoulli process realizations Xi are often summarized by an infinite vector of binary indicator variables fi = [fi1, fi2, …]. [sent-63, score-0.256]

29 Here fik = 1 if and only if object i exhibits feature k. [sent-66, score-0.593]

30 As shown by Thibaux and Jordan [22], marginalizing over the beta process measure B, and taking c = 1, provides a predictive distribution on indicators known as the Indian buffet process (IBP) of Griffiths and Ghahramani [9]. [sent-67, score-0.492]

31 The Indian buffet consists of an infinitely long buffet line of dishes, or features. [sent-69, score-0.17]
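
As a hedged illustration of this predictive distribution, the routine below (with an assumed name, sample_ibp) draws a binary feature matrix from the Indian buffet process with mass parameter alpha; it is a generic sketch, not code from the paper.

    import numpy as np

    def sample_ibp(num_objects, alpha, rng=None):
        """Draw a binary feature matrix Z (objects x features) from the IBP."""
        rng = np.random.default_rng(rng)
        counts = []            # m_k: number of previous objects with feature k
        rows = []
        for i in range(num_objects):
            row = []
            # Existing dishes: object i+1 takes feature k with prob m_k / (i+1).
            for k, m_k in enumerate(counts):
                take = rng.random() < m_k / (i + 1.0)
                row.append(int(take))
                counts[k] += int(take)
            # New dishes: Poisson(alpha / (i+1)) features unique to this object.
            k_new = rng.poisson(alpha / (i + 1.0))
            row.extend([1] * k_new)
            counts.extend([1] * k_new)
            rows.append(row)
        Z = np.zeros((num_objects, len(counts)), dtype=int)
        for i, row in enumerate(rows):
            Z[i, :len(row)] = row
        return Z

    print(sample_ibp(num_objects=5, alpha=2.0, rng=0))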

32 3 Describing Multiple Time Series with Beta Processes Assume we have a set of N objects, each of whose dynamics is described by a switching vector autoregressive (VAR) process, with switches occurring according to a discrete-time Markov process. [sent-72, score-0.188]

33 Let yt represent the observation vector of the ith object at time t, and zt the latent dynamical mode. [sent-74, score-0.73]

34 Assuming an order r switching VAR process, denoted by VAR(r), we have z_t^{(i)} ∼ π^{(i)}_{z_{t−1}^{(i)}} (5) and y_t^{(i)} = Σ_{j=1}^{r} A_{j,z_t^{(i)}} y_{t−j}^{(i)} + e_t^{(i)}(z_t^{(i)}) = A_{z_t^{(i)}} ỹ_t^{(i)} + e_t^{(i)}(z_t^{(i)}), (6) where e_t^{(i)}(k) ∼ N(0, Σk), A_k = [A_{1,k} … A_{r,k}], and ỹ_t^{(i)} = [y_{t−1}^{(i)T} … y_{t−r}^{(i)T}]^T. [sent-75, score-0.69]
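
The sketch below evaluates the behavior-specific emission log-likelihoods log p(y_t | z_t = k) implied by Eq. (6); the block-stacking convention for A_k and ỹ_t is our assumption about how the lag matrices are arranged, and the function name is illustrative.

    import numpy as np

    def var_emission_loglik(y, r, A, Sigma):
        """log p(y_t | z_t = k) for each t >= r and behavior k under Eq. (6).

        y     : (T, D) observations for one object.
        A     : (K, D, D*r) stacked lag matrices A_k = [A_{1,k} ... A_{r,k}].
        Sigma : (K, D, D) innovation covariances.
        Returns a (T - r, K) matrix of log-likelihoods.
        """
        T, D = y.shape
        K = A.shape[0]
        out = np.zeros((T - r, K))
        for t in range(r, T):
            # stack [y_{t-1}; y_{t-2}; ...; y_{t-r}] into one D*r vector
            y_tilde = y[t - 1::-1][:r].reshape(-1)
            for k in range(K):
                diff = y[t] - A[k] @ y_tilde
                _, logdet = np.linalg.slogdet(Sigma[k])
                quad = diff @ np.linalg.solve(Sigma[k], diff)
                out[t - r, k] = -0.5 * (D * np.log(2 * np.pi) + logdet + quad)
        return out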

35 We refer to these VAR processes, with parameters θk = {Ak , Σk }, as behaviors, and use a beta process prior to couple the dynamic behaviors exhibited by different objects or sequences. [sent-83, score-0.705]

36 As in Section 2, let fi be a vector of binary indicator variables, where fik denotes whether object i exhibits behavior k for some t ∈ {1, …, Ti}. [sent-85, score-0.705]

37 Given fi, we define a feature-constrained transition distribution π^{(i)} = {π_k^{(i)}}, which governs the ith object’s Markov transitions among its set of dynamic behaviors. [sent-89, score-0.334]

38 We denote this collection of transition variables by η^{(i)}, and use them to define object-specific, feature-constrained transition distributions: π_j^{(i)} ∝ [η_{j1}^{(i)}, η_{j2}^{(i)}, …] ⊗ fi, normalized over the indices k with fik = 1. [sent-91, score-0.222]

39 This construction defines πj over the full set of positive integers, but assigns positive mass only at indices k where fik = 1. [sent-96, score-0.437]

40 (i) The preceding generative process can be equivalently represented via a sample πj ˜ Dirichlet distribution of dimension Ki = k fik , containing the non-zero entries of (i) πj | fi , γ, κ ∼ Dir([γ, . [sent-97, score-0.611]

41 ˜ from a finite (i) πj : (9) (i) The κ hyperparameter places extra expected mass on the component of πj corresponding to a self˜ (i) transition πjj , analogously to the sticky hyperparameter of Fox et al. [sent-104, score-0.231]
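
A minimal sketch of this construction, assuming (as in Eq. (9)) that the transition variables are gamma draws with an extra κ on the self-transition, masked by f_i and renormalized; names and shapes are ours, not the authors'.

    import numpy as np

    def feature_constrained_transitions(f_i, gamma, kappa, rng=None):
        """Build pi^{(i)}: each row j is eta_j masked by f_i and renormalized.

        eta_{jk} ~ Gamma(gamma + kappa*1[j==k], 1); normalizing the entries with
        f_ik = 1 yields Dir([gamma, ..., gamma+kappa, ..., gamma]) restricted to
        the active behaviors, i.e. a sticky self-transition bias.
        """
        rng = np.random.default_rng(rng)
        K = len(f_i)
        shape = gamma + kappa * np.eye(K)        # extra mass on self-transitions
        eta = rng.gamma(shape, 1.0)              # unnormalized transition weights
        masked = eta * np.asarray(f_i)[None, :]  # zero out unused behaviors
        return masked / masked.sum(axis=1, keepdims=True)

    f_i = np.array([1, 0, 1, 1, 0])
    pi = feature_constrained_transitions(f_i, gamma=1.0, kappa=5.0, rng=1)
    print(np.round(pi[np.where(f_i)[0]], 3))     # rows for the active behaviors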

42 4 MCMC Methods for Posterior Inference We have developed an MCMC method which alternates between resampling binary feature assignments given observations and dynamical parameters, and dynamical parameters given observations and features. [sent-108, score-0.642]

43 We leverage the fact that fixed feature assignments instantiate a set of finite AR-HMMs, for which dynamic programming can be used to efficiently compute marginal likelihoods. [sent-110, score-0.151]
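
For reference, a standard log-space forward (sum-product) recursion of the kind alluded to here: it takes a precomputed emission log-likelihood matrix and a feature-constrained transition matrix, and is a generic sketch rather than the authors' implementation.

    import numpy as np

    def hmm_log_marginal(log_lik, pi, pi0=None):
        """log p(y_{1:T}) for a finite HMM via the forward recursion.

        log_lik : (T, K) array of log p(y_t | z_t = k) for the K active behaviors.
        pi      : (K, K) row-stochastic, feature-constrained transition matrix.
        pi0     : (K,) initial distribution (uniform over active modes if None).
        """
        T, K = log_lik.shape
        if pi0 is None:
            pi0 = np.full(K, 1.0 / K)
        log_alpha = np.log(pi0) + log_lik[0]
        for t in range(1, T):
            m = log_alpha.max()
            # stable log-sum-exp over the previous mode
            log_alpha = m + np.log(np.exp(log_alpha - m) @ pi) + log_lik[t]
        m = log_alpha.max()
        return m + np.log(np.exp(log_alpha - m).sum())

    # toy check with two modes and three observations
    ll = np.log(np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]))
    P = np.array([[0.9, 0.1], [0.1, 0.9]])
    print(hmm_log_marginal(ll, P))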

44 4.1 Sampling binary feature assignments Let F^{−ik} denote the set of all binary feature indicators excluding fik, and K_+^{−i} be the number of behaviors currently instantiated by objects other than i. [sent-113, score-0.974]

45 The beta process distributed measure B | B0 ∼ BP(1, B0 ) is represented by its masses ωk and locations θk , as in Eq. [sent-121, score-0.291]

46 The features are then conditionally independent draws fik | ωk ∼ Bernoulli(ωk), and are used to define feature-constrained transition distributions π_j^{(i)} | fi, γ, κ ∼ Dir([γ, …, γ, γ + κ, γ, …, γ] ⊗ fi). [sent-123, score-0.703]

47 Given the ith object’s observation sequence y^{(i)}_{1:Ti}, transition variables η^{(i)} = η^{(i)}_{1:K_+^{−i}, 1:K_+^{−i}}, and shared dynamic parameters θ_{1:K_+^{−i}}, we consider the feature indicators fik for the currently used features k ∈ {1, …, K_+^{−i}}. [sent-136, score-0.817]

48 These indicators have the following posterior distribution: p(fik | F^{−ik}, y^{(i)}_{1:Ti}, η^{(i)}, θ_{1:K_+^{−i}}, α) ∝ p(fik | F^{−ik}, α) p(y^{(i)}_{1:Ti} | fi, η^{(i)}, θ_{1:K_+^{−i}}). (10) [sent-139, score-0.187]

49 Here, the IBP prior implies that p(fik = 1 | F^{−ik}, α) = m_k^{−i}/N, where m_k^{−i} denotes the number of objects other than object i that exhibit behavior k. [sent-140, score-0.201]

50 In evaluating this expression, we have exploited the exchangeability of the IBP [9], which follows directly from the beta process construction [22]. [sent-141, score-0.291]

51 To update fik given F^{−ik}, we thus use the posterior of Eq. (10) to evaluate an MH proposal. [sent-143, score-0.444]

52 The proposal flips fik to the complement f̄ of its current value f: fik ∼ ρ(f̄ | f) δ(fik, f̄) + (1 − ρ(f̄ | f)) δ(fik, f), where ρ(f̄ | f) = min{ p(fik = f̄ | F^{−ik}, y^{(i)}_{1:Ti}, η^{(i)}, θ_{1:K_+^{−i}}, α) / p(fik = f | F^{−ik}, y^{(i)}_{1:Ti}, η^{(i)}, θ_{1:K_+^{−i}}, α), 1 }. (11) [sent-144, score-0.835]

53 To compute likelihoods, we combine fi and η^{(i)} to construct feature-constrained transition distributions π_j^{(i)} as in Eq. [sent-145, score-0.249]
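
Combining the IBP prior term with such a likelihood computation gives the flip update of Eqs. (10)–(11). In the sketch below, log_lik_fn is a hypothetical callable (e.g. a forward pass over the behaviors active in f), and the update is only meant for shared features with 0 < m_k^{−i} < N; unique features are handled by the birth/death move described later.

    import numpy as np

    def flip_feature_log_ratio(k, f_i, m_minus_i, N, log_lik_fn):
        """Log MH ratio for proposing fik -> 1 - fik (an Eq. (10)-(11) style update).

        m_minus_i : counts m_k^{-i} of other objects using each behavior.
        log_lik_fn: callable f -> log p(y_{1:T_i} | f, eta^{(i)}, theta), e.g. an
                    HMM forward pass over the behaviors active in f.
        """
        f_prop = f_i.copy()
        f_prop[k] = 1 - f_prop[k]
        p1 = m_minus_i[k] / N                 # IBP prior: p(fik = 1 | F^{-ik})
        log_prior = (np.log(p1) if f_prop[k] == 1 else np.log(1.0 - p1)) \
                  - (np.log(p1) if f_i[k] == 1 else np.log(1.0 - p1))
        return log_prior + log_lik_fn(f_prop) - log_lik_fn(f_i)

    def mh_flip(k, f_i, m_minus_i, N, log_lik_fn, rng):
        log_ratio = flip_feature_log_ratio(k, f_i, m_minus_i, N, log_lik_fn)
        if np.log(rng.random()) < min(0.0, log_ratio):
            f_i[k] = 1 - f_i[k]               # accept the flip
        return f_i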

54 An alternative approach is needed to resample the Poisson(α/N ) “unique” features associated only −i with object i. [sent-147, score-0.146]

55 Let K_+ = K_+^{−i} + ni, where ni is the number of features unique to object i, and define f_{−i} = f_{i,1:K_+^{−i}} and f_{+i} = f_{i,K_+^{−i}+1:K_+}. [sent-148, score-0.396]

56 The posterior distribution over ni is then given by p(ni | f_{−i}, y^{(i)}_{1:Ti}, η^{(i)}, θ_{1:K_+^{−i}}, α) ∝ ((α/N)^{ni} e^{−α/N} / ni!) ∫ p(y^{(i)}_{1:Ti} | f_{−i}, f_{+i} = 1, η^{(i)}, η^{+}, θ_{1:K_+^{−i}}, θ^{+}) dB0(θ^{+}) dH(η^{+}). (12) [sent-149, score-0.397]

57 Here H is the gamma prior on transition variables, θ^{+} = θ_{K_+^{−i}+1:K_+} are the parameters of unique features, and η^{+} are the transition parameters η^{(i)}_{jk} to or from unique features j, k ∈ {K_+^{−i} + 1 : K_+}. [sent-150, score-0.467]

58 [15] instead consider independent Metropolis proposals which replace the existing unique features by n_i′ ∼ Poisson(α/N) new features, with corresponding parameters θ′ drawn from the prior. [sent-154, score-0.185]

59 For high-dimensional models like that considered in this paper, however, moves proposing large numbers of unique features have low acceptance rates. [sent-155, score-0.17]

60 Thus, mixing rates are greatly affected by the beta process hyperparameter α. [sent-156, score-0.33]

61 We instead develop a “birth and death” reversible jump MCMC (RJMCMC) sampler [8], which proposes to either add a single new feature, or eliminate one of the existing features in f_{+i}. [sent-157, score-0.192]

62 The feature proposal qf(· | ·) encodes the probabilities of birth and death moves: a new feature is created with probability 0.5. [sent-161, score-0.433]

63 Each of the ni existing features is deleted with probability 0.5/ni. [sent-162, score-0.164]

64 For parameters, we define our proposal using the generative model: qθ(θ′_{+} | f′_{+i}, f_{+i}, θ_{+}) = b0(θ′_{+,ni+1}) Π_{k=1}^{ni} δ_{θ_{+k}}(θ′_{+k}) for the birth of feature ni + 1, and Π_{k≠ℓ} δ_{θ_{+k}}(θ′_{+k}) for the death of feature ℓ, (14) where b0 is the density associated with α^{−1} B0. [sent-164, score-0.469]

65 The distribution qη (· | ·) is defined similarly, but using the gamma prior on transition variables of Eq. [sent-165, score-0.217]

66 Because our birth and death proposals do not modify the values of existing parameters, the Jacobian term normally arising in RJMCMC algorithms simply equals one. [sent-169, score-0.273]
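
A hedged sketch of one such birth/death move on the unique features of object i: because the birth proposal draws parameters (and, in the paper, the associated transition variables) from their priors, those densities cancel, and only the Poisson(α/N) count prior enters the ratio. Function names and the handling of forced moves at ni = 0 are our assumptions.

    import numpy as np
    from math import lgamma

    def birth_death_step(theta_unique, log_lik_fn, sample_theta_prior, alpha, N, rng):
        """One birth/death RJMCMC move on the features unique to object i (sketch).

        log_lik_fn(thetas)   : marginal likelihood of object i's data given the
                               shared features plus these unique ones.
        sample_theta_prior() : draw new behavior parameters from the base prior.
        """
        n_i = len(theta_unique)
        birth = (n_i == 0) or (rng.random() < 0.5)
        if birth:
            theta_prop = theta_unique + [sample_theta_prior()]
            q_fwd = 1.0 if n_i == 0 else 0.5      # birth is forced when n_i = 0
            q_rev = 0.5 / (n_i + 1)               # reverse: delete that same feature
        else:
            drop = int(rng.integers(n_i))
            theta_prop = [th for j, th in enumerate(theta_unique) if j != drop]
            q_fwd = 0.5 / n_i
            q_rev = 1.0 if n_i == 1 else 0.5      # reverse: a (possibly forced) birth
        n_prop = len(theta_prop)
        lam = alpha / N                           # Poisson prior on # unique features
        log_prior = (n_prop - n_i) * np.log(lam) - (lgamma(n_prop + 1) - lgamma(n_i + 1))
        log_ratio = (log_lik_fn(theta_prop) - log_lik_fn(theta_unique)
                     + log_prior + np.log(q_rev) - np.log(q_fwd))
        return theta_prop if np.log(rng.random()) < min(0.0, log_ratio) else theta_unique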

67 4.2 Sampling dynamic parameters and transition variables Posterior updates to transition variables η^{(i)} and shared dynamic parameters θk are greatly simplified if we instantiate the mode sequences z^{(i)}_{1:Ti} for each object i. [sent-171, score-0.645]

68 We treat these mode sequences as auxiliary variables: they are sampled given the current MCMC state, conditioned on when resampling model parameters, and then discarded for subsequent updates of feature assignments fi . [sent-172, score-0.353]

69 Because Dirichlet priors are conjugate to multinomial observations z^{(i)}_{1:T}, the posterior of π_j^{(i)} is π_j^{(i)} | fi, z^{(i)}_{1:T}, γ, κ ∼ Dir([γ + n^{(i)}_{j1}, …, γ + κ + n^{(i)}_{jj}, …, γ + n^{(i)}_{jK}] ⊗ fi), where n^{(i)}_{jk} counts the transitions from mode j to mode k in z^{(i)}_{1:T}. [sent-174, score-0.263]

70 Since the mode sequence z^{(i)}_{1:T} is generated from feature-constrained transition distributions, n^{(i)}_{jk} is zero for any k such that fik = 0.
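
A small sketch of this conjugate resampling step, reusing the gamma-normalization construction above; transition counts are accumulated from the instantiated mode sequence and added to the sticky Dirichlet prior. Names are illustrative.

    import numpy as np

    def resample_transitions(z, f_i, gamma, kappa, rng):
        """Sample pi^{(i)} from its Dirichlet posterior given mode sequence z.

        z   : length-T array of mode indices (values k with f_i[k] = 1).
        f_i : binary feature vector restricting which behaviors are available.
        """
        K = len(f_i)
        n = np.zeros((K, K))
        for t in range(1, len(z)):
            n[z[t - 1], z[t]] += 1               # transition counts n_{jk}
        shape = gamma + kappa * np.eye(K) + n    # sticky prior plus counts
        eta = rng.gamma(shape, 1.0)
        masked = eta * np.asarray(f_i)[None, :]  # zero mass on inactive behaviors
        return masked / masked.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    f_i = np.array([1, 1, 0, 1])
    z = np.array([0, 0, 1, 1, 3, 3, 0])
    print(np.round(resample_transitions(z, f_i, gamma=1.0, kappa=5.0, rng=rng), 2))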

71 Figure 2: (a) Observation sequences for each of 5 switching AR(1) time series colored by true mode sequence, and offset for clarity. [sent-202, score-0.279]

72 (b) True feature matrix (top) of the five objects and estimated feature matrix (bottom) averaged over 10,000 MCMC samples taken from 100 trials every 10th sample. [sent-203, score-0.189]

73 The estimated feature matrices are produced from mode sequences mapped to the ground truth labels according to the minimum Hamming distance metric, and selecting modes with more than 2% of the object’s observations. [sent-205, score-0.21]
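
One way to implement that mapping is an optimal label assignment followed by a Hamming count, sketched below with scipy's linear_sum_assignment; this mirrors the described metric but is not the authors' evaluation code.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def normalized_hamming(est, truth):
        """Relabel estimated modes to best match the truth, then compute the
        normalized Hamming distance between the relabeled and true sequences."""
        est_labels = np.unique(est)
        true_labels = np.unique(truth)
        # overlap[a, b]: how often estimated label a co-occurs with true label b
        overlap = np.zeros((len(est_labels), len(true_labels)))
        for a, ea in enumerate(est_labels):
            for b, tb in enumerate(true_labels):
                overlap[a, b] = np.sum((est == ea) & (truth == tb))
        rows, cols = linear_sum_assignment(-overlap)   # maximize total overlap
        mapping = {est_labels[r]: true_labels[c] for r, c in zip(rows, cols)}
        mapped = np.array([mapping.get(e, -1) for e in est])  # unmatched -> error
        return np.mean(mapped != truth)

    truth = np.array([0, 0, 1, 1, 1, 2, 2])
    est = np.array([5, 5, 3, 3, 2, 2, 2])
    print(normalized_hamming(est, truth))   # 1/7: one time step mislabeled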

74 4.3 Sampling the beta process and Dirichlet transition hyperparameters We additionally place priors on the Dirichlet hyperparameters γ and κ, as well as the beta process parameter α. [sent-207, score-0.693]

75 Each sub-step uses a gamma proposal distribution q(· | ·) with fixed variance σ_γ^2 or σ_κ^2, and mean equal to the current hyperparameter value. [sent-214, score-0.19]
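
A sketch of one such Metropolis-Hastings sub-step: the gamma proposal is parameterized to have mean equal to the current value and fixed variance, with the asymmetric-proposal (Hastings) correction included; log_post stands in for the relevant conditional and is an assumption of this example.

    import numpy as np
    from scipy.stats import gamma as gamma_dist

    def mh_gamma_step(x, log_post, var, rng):
        """One MH update for a positive hyperparameter using a gamma proposal
        with mean equal to the current value and fixed variance `var`."""
        def propose_params(mean):
            a = mean ** 2 / var        # shape, so that a*scale = mean
            scale = var / mean         # scale, so that a*scale**2 = var
            return a, scale
        a, scale = propose_params(x)
        x_new = rng.gamma(a, scale)
        a_rev, scale_rev = propose_params(x_new)
        log_ratio = (log_post(x_new) - log_post(x)
                     + gamma_dist.logpdf(x, a_rev, scale=scale_rev)
                     - gamma_dist.logpdf(x_new, a, scale=scale))
        return x_new if np.log(rng.random()) < min(0.0, log_ratio) else x

    rng = np.random.default_rng(0)
    alpha = 1.0
    log_post = lambda a: -a + 3 * np.log(a)   # toy Gamma(4, 1)-shaped target
    for _ in range(100):
        alpha = mh_gamma_step(alpha, log_post, var=0.5, rng=rng)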

76 5 Synthetic Experiments To test the ability of the BP-AR-HMM to discover shared dynamics, we generated five time series that switched between AR(1) models y_t^{(i)} = a_{z_t^{(i)}} y_{t−1}^{(i)} + e_t^{(i)}(z_t^{(i)}) (22) with ak ∈ {−0. [sent-218, score-0.443]
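
For illustration, a generator for switching AR(1) data of this kind; the specific AR coefficients, noise level, and transition matrix below are made-up values, not the ones used in the paper's experiment.

    import numpy as np

    def generate_switching_ar1(T, a_coeffs, pi, sigma=0.1, rng=None):
        """Generate one switching AR(1) series: y_t = a_{z_t} y_{t-1} + e_t(z_t)."""
        rng = np.random.default_rng(rng)
        K = len(a_coeffs)
        z = np.zeros(T, dtype=int)
        y = np.zeros(T)
        z[0] = rng.integers(K)
        y[0] = rng.normal(0.0, sigma)
        for t in range(1, T):
            z[t] = rng.choice(K, p=pi[z[t - 1]])
            y[t] = a_coeffs[z[t]] * y[t - 1] + rng.normal(0.0, sigma)
        return y, z   # z indexes the object's active behaviors, not the full library

    # Example: one object that exhibits two behaviors from an illustrative library.
    a_library = np.array([-0.8, -0.4, 0.3, 0.8])
    f_i = np.array([1, 0, 0, 1])
    active = np.where(f_i)[0]
    pi_active = np.full((len(active), len(active)), 0.1) + 0.8 * np.eye(len(active))
    y, z = generate_switching_ar1(200, a_library[active], pi_active, rng=0)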

77 Comparing to the true feature matrix, we see that our model is indeed able to discover most of the underlying latent structure of the time series despite the challenging setting defined by the close AR coefficients. [sent-233, score-0.199]

78 One might propose, as an alternative to the BP-AR-HMM, using an architecture based on the hierarchical Dirichlet process of [21]; specifically we could use the HDP-AR-HMMs of [5] tied together with a shared set of transition and dynamic parameters. [sent-234, score-0.337]

79 The first two objects, with four times the data points of the third, switched between dynamical modes defined […]. (Figure: Normalized Hamming Distance plots.)

80 (c)-(d) Examples of typical segmentations into behavior modes for the three objects at Gibbs iteration 1000 for the two models (top = estimate, bottom = truth). [sent-249, score-0.141]

81 The results in Fig. 3 indicate that the multiple HDP-AR-HMM model typically describes the third object using ak ∈ {−0. [sent-261, score-0.205]

82 These results reiterate that the feature model emphasizes choosing behaviors rather than assuming all objects are performing minor variations of the same dynamics. [sent-264, score-0.395]

83 The gamma proposals used σγ = 1 and σκ = 100 while the MNIW prior was given M = 0, K = 0. [sent-266, score-0.192]

84 At initialization, each time series was segmented into five contiguous blocks, with feature labels unique to that sequence. [sent-269, score-0.155]

85 6 Motion Capture Experiments The linear dynamical system is a common model for describing simple human motion [11], and the more complicated SLDS has been successfully applied to the problem of human motion synthesis, classification, and visual tracking [17, 18]. [sent-270, score-0.475]

86 Other approaches develop non-linear dynamical models using Gaussian processes [25] or based on a collection of binary latent features [20]. [sent-271, score-0.404]

87 However, there has been little effort in jointly segmenting and identifying common dynamic behaviors amongst a set of multiple motion capture (MoCap) recordings of people performing various tasks. [sent-272, score-0.53]

88 (b) Hamming distance versus number of GMM clusters / HMM states on raw observations (blue/green) and first-difference observations (red/cyan), with the BP- and HDP- AR-HMM segmentations (black) and true feature count (magenta) shown for comparison. [sent-283, score-0.181]

89 Each of these routines used some combination of the following motion categories: running in place, jumping jacks, arm circles, side twists, knee raises, squats, punching, up and down, two variants of toe touches, arch over, and a reach out stretch. [sent-286, score-0.17]

90 Although some behaviors are merged or split, the overall performance shows a clear ability to find common motions. [sent-295, score-0.272]

91 The split behaviors shown in green and yellow can be attributed to the two subjects performing the same motion in a distinct manner. [sent-296, score-0.405]

92 For example, knee raises are performed in combination with upper body motion or not, and running is done with hands in or out of sync with knees, etc. [sent-298, score-0.17]

93 In Fig. 5(a), we additionally see that the BP-AR-HMM provides a superior ability to discover the shared feature structure. [sent-308, score-0.165]

94 7 Discussion Utilizing the beta process, we developed a coherent Bayesian nonparametric framework for discovering dynamical features common to multiple time series. [sent-309, score-0.516]

95 This formulation allows for objectspecific variability in how the dynamical behaviors are used. [sent-310, score-0.481]

96 We additionally developed a novel exact sampling algorithm for non-conjugate beta process models. [sent-311, score-0.291]

97 Although we focused on switching VAR processes, our approach could be equally well applied to a wide range of other switching dynamical systems. [sent-313, score-0.463]

98 Infinite latent feature models and the Indian buffet process. [sent-395, score-0.199]

99 Nonparametric Bayes estimators based on beta processes in models for life history data. [sent-400, score-0.261]

100 A dynamic Bayesian network approach to figure tracking using learned dynamic models. [sent-453, score-0.17]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('fik', 0.395), ('behaviors', 0.272), ('beta', 0.213), ('dynamical', 0.209), ('zt', 0.209), ('yt', 0.177), ('syy', 0.171), ('hmm', 0.166), ('ti', 0.143), ('fi', 0.138), ('motion', 0.133), ('mocap', 0.128), ('switching', 0.127), ('ak', 0.118), ('ibp', 0.116), ('transition', 0.111), ('slds', 0.107), ('gamma', 0.106), ('ni', 0.105), ('hamming', 0.097), ('birth', 0.096), ('death', 0.091), ('ik', 0.087), ('mh', 0.087), ('gmm', 0.087), ('object', 0.087), ('proposals', 0.086), ('bep', 0.086), ('buffet', 0.085), ('dynamic', 0.085), ('process', 0.078), ('fox', 0.076), ('ki', 0.075), ('var', 0.072), ('bp', 0.071), ('acceptance', 0.071), ('mcmc', 0.07), ('qf', 0.069), ('jk', 0.067), ('feature', 0.066), ('indian', 0.066), ('barbi', 0.064), ('iw', 0.064), ('mniw', 0.064), ('njj', 0.064), ('ar', 0.063), ('shared', 0.063), ('poisson', 0.061), ('angles', 0.061), ('sy', 0.061), ('autoregressive', 0.061), ('features', 0.059), ('mode', 0.057), ('mk', 0.057), ('objects', 0.057), ('rjmcmc', 0.056), ('atoms', 0.056), ('njk', 0.051), ('az', 0.051), ('jump', 0.05), ('bernoulli', 0.049), ('posterior', 0.049), ('sudderth', 0.049), ('series', 0.049), ('latent', 0.048), ('processes', 0.048), ('resampling', 0.046), ('sequences', 0.046), ('reversible', 0.046), ('markov', 0.045), ('exhibits', 0.045), ('proposal', 0.045), ('dirichlet', 0.045), ('segmentations', 0.043), ('pavlovi', 0.043), ('rehg', 0.043), ('mass', 0.042), ('modes', 0.041), ('jordan', 0.041), ('volume', 0.041), ('gibbs', 0.041), ('unique', 0.04), ('binary', 0.04), ('conjugate', 0.04), ('capture', 0.04), ('dir', 0.039), ('hyperparameter', 0.039), ('toolbox', 0.038), ('indicators', 0.038), ('skeleton', 0.037), ('cmu', 0.037), ('emissions', 0.037), ('diff', 0.037), ('knee', 0.037), ('sampler', 0.037), ('observations', 0.036), ('discover', 0.036), ('nite', 0.035), ('nonparametric', 0.035), ('willsky', 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000012 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.

2 0.23288937 136 nips-2009-Learning to Rank by Optimizing NDCG Measure

Author: Hamed Valizadegan, Rong Jin, Ruofei Zhang, Jianchang Mao

Abstract: Learning to rank is a relatively new field of study, aiming to learn a ranking function from a set of training data with relevancy labels. The ranking algorithms are often evaluated using information retrieval measures, such as Normalized Discounted Cumulative Gain (NDCG) [1] and Mean Average Precision (MAP) [2]. Until recently, most learning to rank algorithms were not using a loss function related to the above mentioned evaluation measures. The main difficulty in direct optimization of these measures is that they depend on the ranks of documents, not the numerical values output by the ranking function. We propose a probabilistic framework that addresses this challenge by optimizing the expectation of NDCG over all the possible permutations of documents. A relaxation strategy is used to approximate the average of NDCG over the space of permutation, and a bound optimization approach is proposed to make the computation efficient. Extensive experiments show that the proposed algorithm outperforms state-of-the-art ranking algorithms on several benchmark data sets. 1

3 0.19367459 22 nips-2009-Accelerated Gradient Methods for Stochastic Optimization and Online Learning

Author: Chonghai Hu, Weike Pan, James T. Kwok

Abstract: Regularized risk minimization often involves non-smooth optimization, either because of the loss function (e.g., hinge loss) or the regularizer (e.g., ℓ1 -regularizer). Gradient methods, though highly scalable and easy to implement, are known to converge slowly. In this paper, we develop a novel accelerated gradient method for stochastic optimization while still preserving their computational simplicity and scalability. The proposed algorithm, called SAGE (Stochastic Accelerated GradiEnt), exhibits fast convergence rates on stochastic composite optimization with convex or strongly convex objectives. Experimental results show that SAGE is faster than recent (sub)gradient methods including FOLOS, SMIDAS and SCD. Moreover, SAGE can also be extended for online learning, resulting in a simple algorithm but with the best regret bounds currently known for these problems. 1

4 0.17466025 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

5 0.16417883 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

6 0.1391228 226 nips-2009-Spatial Normalized Gamma Processes

7 0.13806406 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

8 0.13117281 257 nips-2009-White Functionals for Anomaly Detection in Dynamical Systems

9 0.11851501 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

10 0.11428154 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

11 0.10636326 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

12 0.10186651 57 nips-2009-Conditional Random Fields with High-Order Features for Sequence Labeling

13 0.098870389 133 nips-2009-Learning models of object structure

14 0.096091881 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes

15 0.094925173 88 nips-2009-Extending Phase Mechanism to Differential Motion Opponency for Motion Pop-out

16 0.088281676 100 nips-2009-Gaussian process regression with Student-t likelihood

17 0.082192212 61 nips-2009-Convex Relaxation of Mixture Regression with Efficient Algorithms

18 0.082158454 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

19 0.081293501 201 nips-2009-Region-based Segmentation and Object Detection

20 0.07798367 188 nips-2009-Perceptual Multistability as Markov Chain Monte Carlo Inference


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.24), (1, -0.056), (2, -0.007), (3, -0.18), (4, 0.179), (5, -0.128), (6, 0.078), (7, 0.015), (8, 0.027), (9, -0.136), (10, 0.128), (11, 0.051), (12, -0.075), (13, -0.096), (14, -0.243), (15, -0.12), (16, 0.074), (17, -0.117), (18, 0.054), (19, 0.078), (20, 0.03), (21, 0.094), (22, -0.064), (23, -0.033), (24, 0.104), (25, -0.052), (26, -0.052), (27, 0.073), (28, -0.014), (29, 0.108), (30, -0.112), (31, -0.062), (32, -0.095), (33, 0.089), (34, -0.034), (35, -0.116), (36, 0.072), (37, 0.072), (38, -0.065), (39, 0.025), (40, -0.031), (41, -0.136), (42, -0.029), (43, -0.017), (44, 0.099), (45, -0.061), (46, -0.035), (47, 0.127), (48, 0.032), (49, 0.003)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95009834 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.

2 0.71434981 114 nips-2009-Indian Buffet Processes with Power-law Behavior

Author: Yee W. Teh, Dilan Gorur

Abstract: The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose a three-parameter generalization of the IBP exhibiting power-law behavior. We achieve this by generalizing the beta process (the de Finetti measure of the IBP) to the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting power-law properties). We derive a stick-breaking construction for the stable-beta process, and find that our power-law IBP is a good model for word occurrences in document corpora. 1

3 0.59102023 29 nips-2009-An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism

Author: Douglas Eck, Yoshua Bengio, Aaron C. Courville

Abstract: The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent factor model framework to two or more unbounded layers of latent factors. From a generative perspective, each layer defines a conditional factorial prior distribution over the binary latent variables of the layer below via a noisy-or mechanism. We explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment. 1

4 0.58675325 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

5 0.57130718 226 nips-2009-Spatial Normalized Gamma Processes

Author: Vinayak Rao, Yee W. Teh

Abstract: Dependent Dirichlet processes (DPs) are dependent sets of random measures, each being marginally DP distributed. They are used in Bayesian nonparametric models when the usual exchangeability assumption does not hold. We propose a simple and general framework to construct dependent DPs by marginalizing and normalizing a single gamma process over an extended space. The result is a set of DPs, each associated with a point in a space such that neighbouring DPs are more dependent. We describe Markov chain Monte Carlo inference involving Gibbs sampling and three different Metropolis-Hastings proposals to speed up convergence. We report an empirical study of convergence on a synthetic dataset and demonstrate an application of the model to topic modeling through time. 1

6 0.49138558 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

7 0.47581297 22 nips-2009-Accelerated Gradient Methods for Stochastic Optimization and Online Learning

8 0.45818073 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction

9 0.45041016 257 nips-2009-White Functionals for Anomaly Detection in Dynamical Systems

10 0.43589827 197 nips-2009-Randomized Pruning: Efficiently Calculating Expectations in Large Dynamic Programs

11 0.43093517 171 nips-2009-Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution

12 0.4275265 172 nips-2009-Nonparametric Bayesian Texture Learning and Synthesis

13 0.40781406 192 nips-2009-Posterior vs Parameter Sparsity in Latent Variable Models

14 0.40594372 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations

15 0.40207246 57 nips-2009-Conditional Random Fields with High-Order Features for Sequence Labeling

16 0.39050484 150 nips-2009-Maximum likelihood trajectories for continuous-time Markov chains

17 0.38545111 56 nips-2009-Conditional Neural Fields

18 0.38090613 136 nips-2009-Learning to Rank by Optimizing NDCG Measure

19 0.36333081 51 nips-2009-Clustering sequence sets for motif discovery

20 0.35646802 132 nips-2009-Learning in Markov Random Fields using Tempered Transitions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.026), (24, 0.027), (25, 0.068), (35, 0.057), (36, 0.138), (39, 0.061), (57, 0.01), (58, 0.095), (61, 0.02), (64, 0.013), (66, 0.049), (71, 0.087), (73, 0.2), (81, 0.013), (86, 0.052), (91, 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.84966499 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

Author: Alan S. Willsky, Erik B. Sudderth, Michael I. Jordan, Emily B. Fox

Abstract: We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time series. The size of the set and the sharing pattern are both inferred from data. We develop an efficient Markov chain Monte Carlo inference method that is based on the Indian buffet process representation of the predictive distribution of the beta process. In particular, our approach uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising results on unsupervised segmentation of visual motion capture data.

2 0.84007317 68 nips-2009-Dirichlet-Bernoulli Alignment: A Generative Model for Multi-Class Multi-Label Multi-Instance Corpora

Author: Shuang-hong Yang, Hongyuan Zha, Bao-gang Hu

Abstract: We propose Dirichlet-Bernoulli Alignment (DBA), a generative model for corpora in which each pattern (e.g., a document) contains a set of instances (e.g., paragraphs in the document) and belongs to multiple classes. By casting predefined classes as latent Dirichlet variables (i.e., instance level labels), and modeling the multi-label of each pattern as Bernoulli variables conditioned on the weighted empirical average of topic assignments, DBA automatically aligns the latent topics discovered from data to human-defined classes. DBA is useful for both pattern classification and instance disambiguation, which are tested on text classification and named entity disambiguation in web search queries respectively.

3 0.7260204 187 nips-2009-Particle-based Variational Inference for Continuous Systems

Author: Andrew Frank, Padhraic Smyth, Alexander T. Ihler

Abstract: Since the development of loopy belief propagation, there has been considerable work on advancing the state of the art for approximate inference over distributions defined on discrete random variables. Improvements include guarantees of convergence, approximations that are provably more accurate, and bounds on the results of exact inference. However, extending these methods to continuous-valued systems has lagged behind. While several methods have been developed to use belief propagation on systems with continuous values, recent advances for discrete variables have not as yet been incorporated. In this context we extend a recently proposed particle-based belief propagation algorithm to provide a general framework for adapting discrete message-passing algorithms to inference in continuous systems. The resulting algorithms behave similarly to their purely discrete counterparts, extending the benefits of these more advanced inference techniques to the continuous domain. 1

4 0.72589499 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

Author: Lei Shi, Thomas L. Griffiths

Abstract: The goal of perception is to infer the hidden states in the hierarchical process by which sensory data are generated. Human behavior is consistent with the optimal statistical solution to this problem in many tasks, including cue combination and orientation detection. Understanding the neural mechanisms underlying this behavior is of particular importance, since probabilistic computations are notoriously challenging. Here we propose a simple mechanism for Bayesian inference which involves averaging over a few feature detection neurons which fire at a rate determined by their similarity to a sensory stimulus. This mechanism is based on a Monte Carlo method known as importance sampling, commonly used in computer science and statistics. Moreover, a simple extension to recursive importance sampling can be used to perform hierarchical Bayesian inference. We identify a scheme for implementing importance sampling with spiking neurons, and show that this scheme can account for human behavior in cue combination and the oblique effect. 1

5 0.72275651 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

Author: Francois Caron, Arnaud Doucet

Abstract: Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose here extensions of these models where the dependency between samples is given by a known decomposable graph. These models have appealing properties and can be easily learned using Monte Carlo techniques. 1
We then extend these models to undirected decomposable graphs; see [2, 8] for an introduction to decomposable graphs. Finally we briefly discuss the directed case. Note that the models proposed here are completely different from the hyper multinomial-Dirichlet in [2] and its recent DP extension [6]. 2.1 Dirichlet multinomial allocation model and DP partition distribution Assume for the time being that K is finite. When the graph is complete, a popular choice for the allocation variables is to consider a Dirichlet multinomial allocation model [11] θ θ , . . . , ), zi |π ∼ π (1) K K where D is the standard Dirichlet distribution and θ > 0. Integrating out π, we obtain the following Dirichlet multinomial prior distribution π ∼ D( Pr(z1:n ) = K j=1 Γ(θ) Γ(nj + θ K) (2) θ Γ(θ + n)Γ( K )K and then, using the straightforward equality Pr(Πn ) = PK where PK = {Πn ∈ Pn |n(Πn ) ≤ K}, we obtain K! (K−n(Πn ))! Pr(z1:n ) valid for for all Πn ∈ n(Π ) Pr(Πn ) = θ Γ(θ) j=1n Γ(nj + K ) K! . θ (K − n(Πn ))! Γ(θ + n)Γ( K )n(Πn ) (3) DP may be seen as a generalization of the Dirichlet multinomial model when the number of components K → ∞; see for example [10]. In this case the distribution over the partition Πn of [n] is given by [11] n(Π ) θn(Πn ) j=1n Γ(nj ) . (4) Pr(Πn ) = n i=1 (θ + i − 1) Let Π−k = {A1,−k , . . . , An(Π−k ),−k } be the partition induced by removing item k to Πn and nj,−k be the size of cluster j for j = 1, . . . , n(Π−k ). It follows from (4) that an item k is assigned to an existing cluster j, j = 1, . . . , n(Π−k ), with probability proportional to nj,−k / (n − 1 + θ) and forms a new cluster with probability θ/ (n − 1 + θ). This property is the basis of the CRP. We now extend the Dirichlet multinomial allocation and the DP partition distribution models to decomposable graphs. 2.2 Markov combination of Dirichlet multinomial and DP partition distributions Let G be a decomposable undirected graph, C = {C1 , . . . , Cp } a perfect ordering of the cliques and S = {S2 , . . . , Cp } the associated separators. It can be easily checked that if the marginal distribution of zC for each clique C ∈ C is defined by (2) then these distributions are consistent as they yield the same distribution (2) over the separators. Therefore, the unique Markov distribution over G with Dirichlet multinomial distribution over the cliques is defined by [8] Pr(zC ) S∈S Pr(zS ) C∈C Pr(z1:n ) = (5) where for each complete set B ⊆ G, we have Pr(zB ) given by (2). It follows that we have for any Πn ∈ PK Γ(θ) K! Pr(Πn ) = (K − n(Πn ))! C∈C Γ(θ) S∈S 2 K j=1 θ Γ(nj,C + K ) θ Γ(θ+nC )Γ( K )K K j=1 θ Γ(nj,S + K ) θ Γ(θ+nS )Γ( K )K (6) where for each complete set B ⊆ G, nj,B is the number of items associated to cluster j, j = 1, . . . , K in B and nB is the total number of items in B. Within each complete set B, the allocation variables define a partition distributed according to the Dirichlet-multinomial distribution. We now extend this approach to DP partition distributions; that is we derive a joint distribution over Πn such that the distribution of ΠB over each complete set B of the graph is given by (4) with θ > 0. Such a distribution satisfies the consistency condition over the separators as the restriction of any partition distributed according to (4) still follows (4) [7]. G Proposition. Let Pn be the set of partitions Πn ∈ Pn such that for each decomposition A, B, and any (i, j) ∈ A × B, i ↔ j ⇒ ∃k ∈ A ∩ B such that k ↔ i ↔ j. 
As K → ∞, the prior distribution G over partitions (6) is given for each Πn ∈ Pn by Pr(Πn ) = θn(Πn ) n(ΠC ) Γ(nj,C ) j=1 nC i=1 (θ+i−1) n(ΠS ) Γ(nj,S ) j=1 nS (θ+i−1) i=1 C∈C S∈S (7) where n(ΠB ) is the number of clusters in the complete set B. Proof. From (6), we have θ n(ΠC ) K(K − 1) . . . (K − n(Πn ) + 1) Pr(Πn ) = K C∈C n(ΠC )− S∈S n(ΠS ) C∈C θ n(ΠS ) S∈S n(ΠC ) θ Γ(nj,C + K ) j=1 nC (θ+i−1) i=1 n(ΠS ) θ Γ(nj,S + K ) j=1 nS (θ+i−1) i=1 Thus when K → ∞, we obtain (7) if n(Πn ) = C∈C n(ΠC ) − S∈S n(ΠS ) and 0 otherwise. We have n(Πn ) ≤ C∈C n(ΠC ) − S∈S n(ΠS ) for any Πn ∈ Pn and the subset of Pn verifying G n(Πn ) = C∈C n(ΠC ) − S∈S n(ΠS ) corresponds to the set Pn . Example. Let the notation i ∼ j (resp. i j) indicates an edge (resp. no edge) between two sites. Let n = 3 and G be the decomposable graph defined by the relations 1 ∼ 2, 2 ∼ 3 and 1 3. G The set P3 is then equal to {{{1, 2, 3}}; {{1, 2}, {3}}; {{1}, {2, 3}}; {{1}, {2}, {3}}}. Note that G the partition {{1, 3}, {2}} does not belong to P3 . Indeed, as there is no edge between 1 and 3, they cannot be in the same cluster if 2 is in another cluster. The cliques are C1 = {1, 2} and C2 = {2, 3} Pr(ΠC1 ) Pr(ΠC2 ) hence we can and the separator is S2 = {2}. The distribution is given by Pr(Π3 ) = Pr(ΠS ) 2 check that we obtain Pr({1, 2, 3}) = (θ + 1)−2 , Pr({1, 2}, {3}) = Pr({1, 2}, {3}) = θ(θ + 1)−2 and Pr({1}, {2}, {3}) = θ2 (θ + 1)−2 . Let now define the full conditional distributions. Based on (7) the conditional assignment of an item k is proportional to the conditional over the cliques divided by the conditional over the separators. G Let denote G−k the undirected graph obtained by removing vertex k from G. Suppose that Πn ∈ Pn . G−k If Π−k ∈ Pn−1 , then do not change the value of item k. Otherwise, item k is assigned to cluster j / where j = 1, . . . , n(Π−k ) with probability proportional to {C∈C|n−k,j,C >0} n−k,j,C {S∈S|n−k,j,S >0} n−k,j,S (8) and to a new cluster with probability proportional to θ, where n−k,j,C is the number of items in the set C \ {k} belonging to cluster j. The updating process is illustrated by the Chinese wedding party process1 in Fig. 1. The results of this section can be extended to the Pitman-Yor process, and more generally to species sampling models. Example (continuing). Given Π−2 = {A1 = {1}, A2 = {3}}, we have −1 Pr( item 2 assigned to A1 = {1}| Π−2 ) = Pr( item 2 assigned to A2 = {3}| Π−2 ) = (θ + 2) −1 and Pr( item 2 assigned to new cluster A3 | Π−2 ) = θ (θ + 2) . Given Π−2 = {A1 = {1, 3}}, item 2 is assigned to A1 with probability 1. 1 Note that this representation describes the full conditionals while the CRP represents the sequential updat- ing. 3 (a) (b) (d) (c) (e) Figure 1: Chinese wedding party. Consider a group of n guests attending a wedding party. Each of the n guests may belong to one or several cliques, i.e. maximal groups of people such that everybody knows everybody. The belonging of each guest to the different cliques is represented by color patches on the figures, and the graphical representation of the relationship between the guests is represented by the graphical model (e). (a) Suppose that the guests are already seated such that two guests cannot be together at the same table is they are not part of the same clique, or if there does not exist a group of other guests such that they are related (“Any friend of yours is a friend of mine”). 
(b) The guest number k leaves his table and either (c) joins a table where there are guests from the same clique as him, with probability proportional to the product of the number of guests from each clique over the product of the number of guests belonging to several cliques on that table or (d) he joins a new table with probability proportional to θ. 2.3 Monte Carlo inference 2.3.1 MCMC algorithm Using the full conditionals, a single site Gibbs sampler can easily be designed to approximate the posterior distribution Pr(Πn |z1:n ). Given a partition Πn , an item k is taken out of the partition. If G−k Π−k ∈ Pn−1 , item k keeps the same value. Otherwise, the item will be assigned to a cluster j, / j = 1, . . . , n(Π−k ), with probability proportional to p(z{k}∪Aj,−k ) × p(zAj,−k ) {C∈C|n−k,j,C >0} n−k,j,C {S∈S|n−k,j,S >0} n−k,j,S (9) and the item will be assigned to a new cluster with probability proportional to p(z{k} ) × θ. Similarly to [3], we can also define a procedure to sample from p(θ|n(Πn ) = k)). We assume that θ ∼ G(a, b) and use p auxiliary variables x1 , . . . , xp . The procedure is as follows. • For j = 1, . . . , p, sample xj |k, θ ∼ Beta(θ + nSj , nCj − nSj ) • Sample θ|k, x1:p ∼ G(a + k, b − j log xj ) 2.3.2 Sequential Monte Carlo We have so far only treated the case of an undirected decomposable graph G. We can formulate a sequential updating rule for the corresponding perfect directed version D of G. Indeed, let (a1 , . . . a|V | ) be a perfect ordering and pa(ak ) be the set of parents of ak which is by definition complete. Let Πk−1 = {A1,k−1 , . . . , An(Πk−1 ),k−1 } denote the partition of the first k−1 vertices a1:k−1 and let nj,pa(ak ) be the number of elements with value j in the set pa(ak ), j = 1, . . . , n(Πk−1 ). Then the vertex ak joins the set j with probability nj,pa(ak ) / θ + cluster with probability θ/ θ + q q nq,pa(ak ) and creates a new nq,pa(ak ) . One can then design a particle filter/SMC method in a similar fashion as [4]. Consider a set of (i) (i) (i) (i) N N particles Πk−1 with weights wk−1 ∝ Pr(Πk−1 , z1:k−1 ) ( i=1 wk−1 = 1) that approximate (i) the posterior distribution Pr(Πk−1 |z1:k−1 ). For each particle i, there are n(Πk−1 ) + 1 possible 4 (i,j) allocations for component ak . We denote Πk the partition obtained by associating component ak (i,j) to cluster j. The weight associated to Πk is given by  nj,pa(ak ) (i)  if j = 1, . . . , n(Πk−1 ) θ+ q nq,pa(ak ) (i,j) (i) p(z{ak }∪Aj,k−1 ) wk−1 = wk−1 × (10) (i) θ  θ+ n p(zAj,k−1 ) if j = n(Πk−1 ) + 1 q q,pa(ak ) (i,j) Then we can perform a deterministic resampling step by keeping the N particles Πk with highest (i,j) (i) (i) weights wk−1 . Let Πk be the resampled particles and wk the associated normalized weights. 3 Prior distributions for infinite binary matrices on decomposable graphs Assume we have n objects; each of these objects being associated to the vertex of a graph G. To K each object is associated a K-dimensional binary vector zn = (zn,1 , . . . , zn,K ) ∈ {0, 1} where zn,i = 1 if object n possesses feature i and zn,i = 0 otherwise. These vectors zt form a binary n × K matrix denoted Z1:n . We denote by ξ1:n the associated equivalence class of left-ordered matrices and let EK be the set of left-ordered matrices with at most K features. In the standard case where the graph G is complete, we review briefly here two popular prior distributions on Z1:n , equivalently on ξ1:n : the Beta-Bernoulli model and the IBP [5]. We then extend these models to undirected decomposable graphs. 
This can be used for example to define a time-varying IBP as illustrated in Section 4. 3.1 Beta-Bernoulli and IBP distributions The Beta-Bernoulli distribution over the allocation Z1:n is K Pr(Z1:n ) = α + K )Γ(n − nj + 1) α Γ(n + 1 + K ) α K Γ(nj j=1 (11) where nj is the number of objects having feature j. It follows that Pr(ξ1:n ) = K K! 2n −1 h=0 α K Γ(nj α + K )Γ(n − nj + 1) α Γ(n + 1 + K ) Kh ! j=1 (12) where Kh is the number of features possessing the history h (see [5] for details). The nonparametric model is obtained by taking the limit when K → ∞ Pr(ξ1:n ) = αK K+ + 2n −1 h=1 Kh ! exp(−αHn ) where K + is the total number of features and Hn = 3.2 (n − nj )!(nj − 1)! n! j=1 n 1 k=1 k . (13) The IBP follows from (13). Markov combination of Beta-Bernoulli and IBP distributions Let G be a decomposable undirected graph, C = {C1 , . . . , Cp } a perfect ordering of the cliques and S = {S2 , . . . , Cp } the associated separators. As in the Dirichlet-multinomial case, it is easily seen that if for each clique C ∈ C, the marginal distribution is defined by (11), then these distributions are consistent as they yield the same distribution (11) over the separators. Therefore, the unique Markov distribution over G with Beta-Bernoulli distribution over the cliques is defined by [8] Pr(ZC ) S∈S Pr(ZS ) C∈C Pr(Z1:n ) = (14) where Pr(ZB ) given by (11) for each complete set B ⊆ G. The prior over ξ1:n is thus given, for ξ1:n ∈ EK , by Pr(ξ1:n ) = K! 2n −1 h=0 Kh ! α K α Γ(nj,C + K )Γ(nC −nj,C +1) α Γ(nC +1+ K ) α α Γ(nj,S + K )Γ(nS −nj,S +1) K K α j=1 Γ(nS +1+ K ) K j=1 C∈C S∈S 5 (15) where for each complete set B ⊆ G, nj,B is the number of items having feature j, j = 1, . . . , K in the set B and nB is the whole set of objects in set B. Taking the limit when K → ∞, we obtain after a few calculations Pr(ξ1:n ) = α + K[n] exp [−α ( C HnC − 2n −1 h=1 Kh ! HnS )] × C∈C + KC (nC −nj,C )!(nj,C −1)! j=1 nC ! S∈S S + KS (nS −nj,S )!(nj,S −1)! j=1 nS ! + + + + if K[n] = C KC − S KS and 0 otherwise, where KB is the number of different features possessed by objects in B. G Let En be the subset of En such that for each decomposition A, B and any (u, v) ∈ A × B: {u and v possess feature j} ⇒ ∃k ∈ A ∩ B such that {k possesses feature j}. Let ξ−k be the left-ordered + matrix obtained by removing object k from ξn and K−k be the total number of different features in G−k + ξ−k . For each feature j = 1, . . . , K−k , if ξ−k ∈ En−1 then we have   b C∈C nj,C if i = 1 S∈C nj,S Pr(ξk,j = i) = (16)  b C∈C (nC −nj,C ) if i = 0 (nS −nj,S ) S∈C nS where b is the appropriate normalizing constant then the customer k tries Poisson α {S∈S|k∈S} nC {C∈C|k∈C} new dishes. We can easily generalize this construction to a directed version D of G using arguments similar to those presented in Section 2; see Section 4 for an application to time-varying matrix factorization. 4 4.1 Applications Sharing clusters among relative groups: An alternative to HDP Consider that we are given d groups with nj data yi,j in each group, i = 1, . . . , nj , j = 1, . . . , d. We consider latent cluster variables zi,j that define the partition of the data. We will use alternatively the notation θi,j = Uzi,j in the following. Hierarchical Dirichlet Process [12] (HDP) is a very popular model for sharing clusters among related groups. It is based on a hierarchy of DPs G0 ∼ DP (γ, H), Gj |G0 ∼ DP (α, G0 ) j = 1, . . . d θi,j |Gj ∼ Gj , yi,j |θi,j ∼ f (θi,j ) i = 1, . . . , nj . 
4 Applications

4.1 Sharing clusters among related groups: an alternative to HDP

Consider that we are given $d$ groups with $n_j$ data points $y_{i,j}$ in each group, $i = 1, \dots, n_j$, $j = 1, \dots, d$. We consider latent cluster variables $z_{i,j}$ that define the partition of the data. We will alternatively use the notation $\theta_{i,j} = U_{z_{i,j}}$ in the following. The Hierarchical Dirichlet Process [12] (HDP) is a very popular model for sharing clusters among related groups. It is based on a hierarchy of DPs:
$$G_0 \sim DP(\gamma, H), \qquad G_j \mid G_0 \sim DP(\alpha, G_0), \quad j = 1, \dots, d,$$
$$\theta_{i,j} \mid G_j \sim G_j, \qquad y_{i,j} \mid \theta_{i,j} \sim f(\theta_{i,j}), \quad i = 1, \dots, n_j.$$
Under conjugacy assumptions, $G_0$, $G_j$ and $U$ can be integrated out and we can approximate the marginal posterior of $(z_{i,j})$ given $y = (y_{i,j})$ with Gibbs sampling, using the Chinese restaurant franchise to sample from the full conditional $p(z_{i,j} \mid z_{-\{i,j\}}, y)$.

Using the graph formulation defined in Section 2, we propose an alternative to the HDP. Let $\theta_{0,1}, \dots, \theta_{0,N}$ be $N$ auxiliary variables belonging to what we call group 0. We define each clique $C_j$ ($j = 1, \dots, d$) to be composed of elements from group $j$ and elements from group 0. This defines a decomposable graphical model whose separator is given by the elements of group 0. We can rewrite the model in a way quite similar to the HDP:
$$G_0 \sim DP(\alpha, H), \qquad \theta_{0,i} \mid G_0 \sim G_0, \quad i = 1, \dots, N,$$
$$G_j \mid \theta_{0,1}, \dots, \theta_{0,N} \sim DP\Big(\alpha + N,\; \tfrac{\alpha}{\alpha + N} H + \tfrac{1}{\alpha + N} \sum_{i=1}^{N} \delta_{\theta_{0,i}}\Big), \quad j = 1, \dots, d,$$
$$\theta_{i,j} \mid G_j \sim G_j, \qquad y_{i,j} \mid \theta_{i,j} \sim f(\theta_{i,j}), \quad i = 1, \dots, n_j.$$
For any subset $A$ and $j \neq k \in \{1, \dots, d\}$ we have $\text{corr}(G_j(A), G_k(A)) = \frac{N}{\alpha + N}$. Again, under conjugacy conditions, we can integrate out $G_0$, $G_j$ and $U$ and approximate the marginal posterior distribution over the partition using the Chinese wedding party process defined in Section 2. Note that for the latent variables $z_{i,j}$, $j = 1, \dots, d$, associated to data, this is the usual CRP update. As in the HDP, multiple layers can be added to the model. Figures 2(a) and (b) respectively give the graphical DP alternative to the HDP and to the 2-layer HDP.

[Figure 2: Hierarchical graphs of dependency with (a) one layer and (b) two layers of hierarchy; panel (a) shows the graphical DP alternative to HDP, panel (b) the alternative to 2-layer HDP.]

If $N = 0$, then $G_j \sim DP(\alpha, H)$ for all $j$; this is equivalent to setting $\gamma \to \infty$ in the HDP. If $N \to \infty$, then $G_j = G_0$ for all $j$ with $G_0 \sim DP(\alpha, H)$; this is equivalent to setting $\alpha \to \infty$ in the HDP. One interesting feature of the model is that, contrary to the HDP, the marginal distribution of $G_j$ at any layer of the tree is $DP(\alpha, H)$. As a consequence, the total number of clusters scales logarithmically (as in the usual DP) with the size of each group, whereas it scales doubly logarithmically in the HDP. Contrary to the HDP, there are at most $N$ clusters shared between different groups. Our model is in that sense reminiscent of [9], where only a limited number of clusters can be shared. Note however that, contrary to [9], we have a simple CRP-like process.

The proposed methodology can be straightforwardly extended to the infinite HMM [12]. The main issue with the proposed model is the setting of the number $N$ of auxiliary parameters. Another issue is that, to achieve high correlation, a large number of auxiliary variables is needed. Nonetheless, the computational time used to sample the auxiliary variables is negligible compared to the time used for the latent variables associated to data, and it can easily be parallelized. The proposed model offers a far richer framework and ensures that, at each level of the tree, the marginal distribution of the partition is given by a DP partition model.
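To make the construction above concrete, here is a small generative sketch in Python of the group-level measures $G_j$ using a truncated stick-breaking representation. The truncation level T, the scalar base measure, and all function names are our own illustrative choices, not part of the model specification; this is a sketch of the prior only, not the posterior inference scheme.

    import numpy as np

    def stick_breaking(concentration, T):
        # Truncated stick-breaking weights of a DP draw (T atoms)
        v = np.random.beta(1.0, concentration, size=T)
        w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
        return w / w.sum()

    def graphical_dp_groups(alpha, N, d, sample_H, T=200):
        # G0 ~ DP(alpha, H), represented by T atoms and stick-breaking weights
        w0 = stick_breaking(alpha, T)
        atoms0 = np.array([sample_H() for _ in range(T)])
        # group-0 auxiliary variables theta_{0,1:N} ~ G0 (the separator)
        theta0 = atoms0[np.random.choice(T, size=N, p=w0)]
        groups = []
        for _ in range(d):
            # Gj ~ DP(alpha + N, alpha/(alpha+N) H + 1/(alpha+N) sum_i delta_{theta_{0,i}})
            wj = stick_breaking(alpha + N, T)
            atoms_j = np.empty(T)
            for t in range(T):
                if np.random.rand() < alpha / (alpha + N):
                    atoms_j[t] = sample_H()                    # fresh atom from H
                else:
                    atoms_j[t] = theta0[np.random.randint(N)]  # shared atom from group 0
            groups.append((wj, atoms_j))
        return theta0, groups

For instance, sample_H = np.random.randn gives a standard normal base measure; with N = 0 every group atom is fresh (no sharing), while large N forces most atoms to coincide with the shared group-0 atoms, matching the two limiting regimes discussed above.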
4.2 Time-varying matrix factorization

Let $X_{1:n}$ be an observed matrix of dimension $n \times D$. We want to find a representation of this matrix in terms of two latent matrices: a binary matrix $Z_{1:n}$ of dimension $n \times K$ and a matrix of latent features $Y$ of dimension $K \times D$. By assuming that $Y \sim \mathcal{N}(0, \sigma_Y^2 I_{K \times D})$ and $X_{1:n} = Z_{1:n} Y + \sigma_X \varepsilon_n$ where $\varepsilon_n \sim \mathcal{N}(0, I_{n \times D})$, we obtain
$$p(X_{1:n} \mid Z_{1:n}) \propto \frac{1}{\sigma_X^{(n - K_n^+)D} \, \sigma_Y^{K_n^+ D} \, \big| Z_{1:n}^{+T} Z_{1:n}^{+} + \tfrac{\sigma_X^2}{\sigma_Y^2} I_{K_n^+} \big|^{D/2}} \exp\!\Big(-\frac{1}{2\sigma_X^2} \text{tr}\big(X_{1:n}^{T} \Sigma_n^{-1} X_{1:n}\big)\Big) \qquad (17)$$
where $\Sigma_n^{-1} = I - Z_{1:n}^{+}\big(Z_{1:n}^{+T} Z_{1:n}^{+} + \tfrac{\sigma_X^2}{\sigma_Y^2} I_{K_n^+}\big)^{-1} Z_{1:n}^{+T}$, $K_n^+$ is the number of non-zero columns of $Z_{1:n}$ and $Z_{1:n}^{+}$ consists of the first $K_n^+$ columns of $Z_{1:n}$. To avoid having to set $K$, [5, 14] assume that $Z_{1:n}$ follows an IBP. The resulting posterior distribution $p(Z_{1:n} \mid X_{1:n})$ can be estimated through MCMC [5] or SMC [14].

We consider here a different model where the object $X_t$ is assumed to arrive at time index $t$, and we want a prior distribution on $Z_{1:n}$ ensuring that objects close in time are more likely to possess similar features. To achieve this, we consider the simple directed graphical model $D$ of Fig. 3, where the site numbering corresponds to a time index and a perfect numbering of $D$ is $(1, 2, \dots)$. The set of parents $pa(t)$ is composed of the $r$ preceding sites $\{t-r, \dots, t-1\}$.

[Figure 3: Directed graph; each site $t$ has the $r$ preceding sites $t-r, \dots, t-1$ as parents.]

The time-varying IBP to sample from $p(Z_{1:n})$ associated to this directed graph follows from (16) and proceeds as follows.

At time $t = 1$:
• Sample $K_1^{new} \sim \text{Poisson}(\alpha)$, set $z_{1,i} = 1$ for $i = 1, \dots, K_1^{new}$ and set $K_1^+ = K_1^{new}$.

At times $t = 2, \dots, r$:
• For $k = 1, \dots, K_t^+$, sample $z_{t,k} \sim \text{Ber}\big(\tfrac{n_{1:t-1,k}}{t}\big)$ and $K_t^{new} \sim \text{Poisson}\big(\tfrac{\alpha}{t}\big)$.

At times $t = r+1, \dots, n$:
• For $k = 1, \dots, K_t^+$, sample $z_{t,k} \sim \text{Ber}\big(\tfrac{n_{t-r:t-1,k}}{r+1}\big)$ and $K_t^{new} \sim \text{Poisson}\big(\tfrac{\alpha}{r+1}\big)$.

Here $K_t^+$ is the total number of features appearing from time $\max(1, t-r)$ to $t-1$ and $n_{t-r:t-1,k}$ is the restriction of $n_{1:t-1,k}$ to the $r$ last customers.

Using (17) and the prior distribution on $Z_{1:n}$, which can be sampled using the time-varying IBP described above, we can easily design an SMC method to sample from $p(Z_{1:n} \mid X_{1:n})$; we do not detail it here. Note that, contrary to [14], our algorithm does not require inverting a matrix whose dimension grows linearly with the size of the data, but only a matrix of dimension $r \times r$.

In order to illustrate the model and the SMC algorithm, we create 200 $6 \times 6$ images using a ground truth $Y$ consisting of 4 different $6 \times 6$ latent images. The $200 \times 4$ binary matrix was generated from $\Pr(z_{t,k} = 1) = \pi_{t,k}$, where $\pi_t = (0.6, 0.5, 0, 0)$ if $t = 1, \dots, 30$, $\pi_t = (0.4, 0.8, 0.4, 0)$ if $t = 31, \dots, 50$ and $\pi_t = (0, 0.3, 0.6, 0.6)$ if $t = 51, \dots, 200$. The order of the model is set to $r = 50$. The feature occurrences $Z_{1:n}$, the true features $Y$ and their estimates are represented in Figure 4. Two spurious features are detected by the model (features 2 and 5 in Fig. 4(c)) but quickly discarded (Fig. 4(d)). The algorithm is able to correctly estimate the varying prior occurrences of the features over time.

[Figure 4: (a) True features, (b) true feature occurrences, (c) MAP estimate $Z^{MAP}$ and (d) associated $E[Y \mid Z^{MAP}]$.]

[Figure 5: (a) $E[X_t \mid \pi_t, Y]$ and (b) $E[X_t \mid X_{1:t-1}]$ at $t = 20, 50, 100, 200$.]
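As a concrete illustration of the prior just described, the following Python sketch draws a matrix $Z_{1:n}$ from the time-varying IBP with lag r. It samples from the prior only (the SMC step using (17) is not included), and the function name and data-structure choices are ours; features that never appear in the length-r window have count zero and are simply never reused, which matches restricting the Bernoulli draws to the $K_t^+$ active features.

    import numpy as np

    def sample_time_varying_ibp(n, alpha, r):
        rows = []          # binary feature vector of each object (variable length)
        K = 0              # number of features instantiated so far
        for t in range(1, n + 1):
            if t == 1:
                k_new = np.random.poisson(alpha)
                z = np.zeros(0, dtype=int)
            else:
                lo = max(0, t - 1 - r)                       # window of the r last customers
                window = np.array([np.pad(rows[s], (0, K - len(rows[s])), mode="constant")
                                   for s in range(lo, t - 1)])
                denom = min(t, r + 1)                        # t for t <= r, r+1 afterwards
                counts = window.sum(axis=0)                  # n_{t-r:t-1,k}
                z = (np.random.rand(K) < counts / denom).astype(int)
                k_new = np.random.poisson(alpha / denom)
            rows.append(np.concatenate([z, np.ones(k_new, dtype=int)]))
            K += k_new
        # assemble the full n x K binary matrix (later columns padded with zeros)
        Z = np.zeros((n, K), dtype=int)
        for t, z in enumerate(rows):
            Z[t, :len(z)] = z
        return Z

For example, sample_time_varying_ibp(200, 2.0, 50) produces a feature matrix in which objects more than r = 50 steps apart share features only through intermediate objects, mimicking the setting of the synthetic experiment above.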
5 Related work and Discussion

The fixed-lag version of the time-varying DP of Caron et al. [1] is a special case of the proposed model when $G$ is given by Fig. 3. The bivariate DP of Walker and Muliere [13] is also a special case when $G$ has only two cliques.

In this paper, we have assumed that the structure of the graph was known beforehand, and we have shown that many flexible models arise from this framework. It would be interesting in the future to investigate the case where the graphical structure is unknown and must be estimated from the data.

Acknowledgment

The authors thank the reviewers for their comments, which helped to improve the writing of the paper.

References

[1] F. Caron, M. Davy, and A. Doucet. Generalized Polya urn for time-varying Dirichlet process mixtures. In Uncertainty in Artificial Intelligence, 2007.
[2] A.P. Dawid and S.L. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21:1272–1317, 1993.
[3] M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
[4] P. Fearnhead. Particle filters for mixture models with an unknown number of components. Statistics and Computing, 14:11–21, 2004.
[5] T.L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, 2006.
[6] D. Heinz. Building hyper Dirichlet processes for graphical models. Electronic Journal of Statistics, 3:290–315, 2009.
[7] J.F.C. Kingman. Random partitions in population genetics. Proceedings of the Royal Society of London, 361:1–20, 1978.
[8] S.L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[9] P. Müller, F. Quintana, and G. Rosner. A method for combining inference across related nonparametric Bayesian models. Journal of the Royal Statistical Society B, 66:735–749, 2004.
[10] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[11] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145–158, 1995.
[12] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
[13] S. Walker and P. Muliere. A bivariate Dirichlet process. Statistics and Probability Letters, 64:1–7, 2003.
[14] F. Wood and T.L. Griffiths. Particle filtering for nonparametric Bayesian matrix factorization. In Advances in Neural Information Processing Systems, 2007.
