nips nips2009 nips2009-23 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Baback Moghaddam, Emtiyaz Khan, Kevin P. Murphy, Benjamin M. Marlin
Abstract: We make several contributions in accelerating approximate Bayesian structural inference for non-decomposable GGMs. Our first contribution is to show how to efficiently compute a BIC or Laplace approximation to the marginal likelihood of non-decomposable graphs using convex methods for precision matrix estimation. This optimization technique can be used as a fast scoring function inside standard Stochastic Local Search (SLS) for generating posterior samples. Our second contribution is a novel framework for efficiently generating large sets of high-quality graph topologies without performing local search. This graph proposal method, which we call “Neighborhood Fusion” (NF), samples candidate Markov blankets at each node using sparse regression techniques. Our third contribution is a hybrid method combining the complementary strengths of NF and SLS. Experimental results in structural recovery and prediction tasks demonstrate that NF and hybrid NF/SLS out-perform state-of-the-art local search methods, on both synthetic and real-world datasets, when realistic computational limits are imposed.
Reference: text
sentIndex sentText sentNum sentScore
1 Our first contribution is to show how to efficiently compute a BIC or Laplace approximation to the marginal likelihood of non-decomposable graphs using convex methods for precision matrix estimation. [sent-12, score-0.435]
2 This optimization technique can be used as a fast scoring function inside standard Stochastic Local Search (SLS) for generating posterior samples. [sent-13, score-0.14]
3 Our second contribution is a novel framework for efficiently generating large sets of high-quality graph topologies without performing local search. [sent-14, score-0.131]
4 This graph proposal method, which we call “Neighborhood Fusion” (NF), samples candidate Markov blankets at each node using sparse regression techniques. [sent-15, score-0.343]
5 Experimental results in structural recovery and prediction tasks demonstrate that NF and hybrid NF/SLS out-perform state-of-the-art local search methods, on both synthetic and real-world datasets, when realistic computational limits are imposed. [sent-17, score-0.217]
6 1 Introduction There are two main reasons to learn the structure of graphical models: knowledge discovery (to interpret the learned topology) and density estimation (to compute log-likelihoods and make predictions). [sent-18, score-0.139]
7 The main difficulty in graphical model structure learning is that the hypothesis space is extremely large, containing up to 2^(d(d−1)/2) graphs on d nodes. [sent-19, score-0.277]
8 It is therefore advantageous to adopt a Bayesian approach and maintain an approximate posterior over graphs instead of using a single “best” graph, especially since Bayesian model averaging (BMA) can improve predictions. [sent-21, score-0.282]
9 MCMC and SLS methods for DAGs exploit the important fact that the marginal likelihood of a DAG, or an approximation such as the Bayesian Information Criterion (BIC) score, can be computed very efficiently under standard assumptions including independent conjugate priors, and complete data. [sent-23, score-0.19]
10 In contrast, undirected graphs (UGs) avoid these issues and may be a more natural representation for some problems. [sent-29, score-0.22]
11 Most prior work on Bayesian inference for Gaussian Graphical Models (GGMs) has focused on the special case of decomposable graphs (e. [sent-32, score-0.45]
12 The popularity of decomposable GGMs is mostly due to the fact that one can compute the marginal likelihood in closed form using similar assumptions to the DAG case. [sent-35, score-0.364]
13 In addition, one can update the marginal likelihood in constant time after single-edge moves in graph space [17]. [sent-36, score-0.267]
14 However, the space of decomposable graphs is much smaller than the space of general undirected graphs. [sent-37, score-0.423]
15 For example, the number of decomposable graphs on d nodes for d = 2, . [sent-38, score-0.423]
16 If we divide the number of decomposable graphs by the number of general undirected graphs, we get the “volume” ratios: 1, 1, 0. [sent-43, score-0.423]
17 Several authors have studied Bayesian inference for GGM structure in the general case using approximations to the marginal likelihood based on Monte Carlo methods (e. [sent-50, score-0.242]
18 However, these methods cannot scale to large graphs because of the high computational cost of Monte Carlo approximation. [sent-53, score-0.22]
19 In Section 2, we show how to efficiently compute BIC and Laplace approximations to the marginal likelihood p(D|G) by using recent convex optimization methods for estimating the precision matrix of a GGM. [sent-55, score-0.269]
20 In Section 3, we present a novel framework for generating large sets of high-quality graphs which we call “Neighborhood Fusion” (NF). [sent-56, score-0.245]
21 It then specifies rules for “fusing” these local densities (via sampling) into an approximate posterior over whole graphs p(G|D). [sent-58, score-0.282]
22 For knowledge discovery, we measure structural recovery in terms of accuracy of finding true edges in synthetic GGMs (with known structure). [sent-61, score-0.148]
23 We show that the proposed NF and hybrid NF/SLS methods for general graphs outperform current approaches to GGM learning for both decomposable and general (non-decomposable) graphs. [sent-63, score-0.487]
24 Throughout this paper we will view the marginal likelihood p(D|G) as the key to structural inference and as being equivalent to the graph posterior p(G|D) by adopting a flat structural prior p(G) w. [sent-64, score-0.476]
25 2 Marginal Likelihood for General Graphs In this section we will review the G-Wishart distribution and discuss approximations to the marginal likelihood of a non-decomposable GGM under the G-Wishart prior. [sent-68, score-0.215]
26 Unlike the decomposable case, here the marginal likelihood can not be found in closed form. [sent-69, score-0.364]
27 Graph edges are denoted by unordered pairs (i, j) and the edge (i, j) is in the graph G if Gij = 1. [sent-75, score-0.174]
28 We denote the prior distribution over precision matrices given a graph G by p(Ω|G). [sent-80, score-0.16]
29 The standard measure of model quality in the Bayesian model selection setting is the marginal likelihood p(D|G), which is obtained by integrating p(D|Ω)p(Ω|G) over the space S_G^{++} as shown in Equation 2. [sent-81, score-0.161]
30 p(D|Ω) = ∏_{i=1}^n N(xi | 0, Ω⁻¹) ∝ |Ω|^{n/2} exp(−(1/2)⟨Ω, S⟩) (1), and p(D|G) = ∫_{S_G^{++}} p(D|Ω) p(Ω|G) dΩ (2). The G-Wishart density in Equation 3 is the Diaconis-Ylvisaker conjugate form [10] for the GGM likelihood on S_G^{++}, as shown in [27]. [sent-82, score-0.123]
31 The resulting marginal likelihood is then the ratio of the two normalizing terms shown in Equation 5 (which we refer to as Zn and Z0 for short). [sent-87, score-0.161]
32 The main drawback of the G-Wishart for general graphs, compared to the HIW for decomposable graphs, is that one cannot compute the normalization terms Zn and Z0 in closed form. [sent-88, score-0.203]
33 As a result, Bayesian model selection for non-decomposable GGMs relies on approximating the marginal likelihood p(D|G). [sent-89, score-0.161]
34 Due to the high computational cost of Monte Carlo and Laplace approximation in high dimensions, we consider two alternative marginal likelihood approximations that are significantly more efficient. [sent-98, score-0.215]
35 Therefore, all three approximations will require finding the mode of a G-Wishart (for the posterior and/or the prior). [sent-103, score-0.164]
36 We apply this method to find ΩG when computing BIC scores, as well as the prior and posterior G-Wishart modes when computing Laplace approximations to Z0 and Zn. [sent-107, score-0.116]
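To make the diagonal-Laplace approximation concrete, a minimal sketch follows. It assumes the unnormalized G-Wishart log-density ((δ−2)/2) log|Ω| − (1/2)⟨Ω, S⟩, takes a pre-computed mode Ω̂ (e.g. from the convex problem in Equation 7 below), and replaces the Hessian determinant by the product of its diagonal entries; the function name, the parameterization of the free entries (diagonal plus one parameter per edge) and the bookkeeping are assumptions, not the authors' code.

```python
import numpy as np

def laplace_log_z(G, S, delta, Omega_hat):
    """Diagonal-Laplace sketch of the log normalizer of a G-Wishart W(G, delta, S),
    evaluated at a pre-computed mode Omega_hat."""
    d = S.shape[0]
    Sigma = np.linalg.inv(Omega_hat)
    _, logdet = np.linalg.slogdet(Omega_hat)
    # log of the unnormalized density at the mode
    l_hat = 0.5 * (delta - 2) * logdet - 0.5 * np.sum(Omega_hat * S)
    # diagonal entries of the negative Hessian over the free parameters
    neg_hess = [0.5 * (delta - 2) * Sigma[u, u] ** 2 for u in range(d)]
    edges = [(u, v) for u in range(d) for v in range(u + 1, d) if G[u, v]]
    neg_hess += [(delta - 2) * (Sigma[u, v] ** 2 + Sigma[u, u] * Sigma[v, v])
                 for u, v in edges]
    p = d + len(edges)
    return l_hat + 0.5 * p * np.log(2 * np.pi) - 0.5 * float(np.sum(np.log(neg_hess)))
```

Under these assumptions, a graph's diagonal-Laplace log-score would be laplace_log_z at the posterior parameters (δ + n, S0 + S, posterior mode) minus laplace_log_z at the prior parameters (δ, S0, prior mode), minus the (nd/2) log(2π) Gaussian constant from Equation 1.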
37 Observe that we can express the mode of any G-Wishart distribution with the optimization problem in Equation 7, where the density is parameterized by the graph G, the degrees of freedom δ and the scatter matrix S. [sent-108, score-0.231]
38 Ω̂_G = arg max_{Ω ∈ S_G^{++}} log W(Ω | G, δ, S) = arg min_{Ω ∈ S_G^{++}} −log|Ω| + ⟨Ω, S⟩/(δ−2) (7). This “COVSEL” type problem [9] is equivalent to finding the maximum likelihood precision matrix of a GGM with known structure G, and is a convex optimization problem. [sent-109, score-0.116]
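As an illustration of Equation 7, the sketch below solves the constrained mode-finding problem with a generic convex solver (cvxpy); the paper relies on faster specialized precision-matrix estimation routines, so this is only a stand-in with an assumed interface.

```python
import numpy as np
import cvxpy as cp

def gwishart_mode(G, S, delta):
    """Mode of W(Omega | G, delta, S), i.e. Equation 7, via a generic SDP solver.
    G: (d, d) boolean adjacency (symmetric, zero diagonal); S: scatter matrix."""
    d = S.shape[0]
    Omega = cp.Variable((d, d), PSD=True)
    # zero constraints on the precision entries of absent edges
    zeros = [Omega[i, j] == 0
             for i in range(d) for j in range(i + 1, d) if not G[i, j]]
    objective = cp.Minimize(-cp.log_det(Omega) + cp.trace(Omega @ (S / (delta - 2))))
    cp.Problem(objective, zeros).solve()
    return Omega.value
```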
39 accuracy trade-off of the various marginal likelihood approximation schemes discussed above; comparing full-Laplace, diagonal-Laplace and the BIC score functions to the marginal likelihood values obtained with the Monte Carlo method of [3]. [sent-122, score-0.431]
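For concreteness, a hedged BIC scoring sketch is shown below; it reuses the gwishart_mode stand-in above (setting δ − 2 = n recovers the structured maximum likelihood estimate) and assumes the usual penalty of one free parameter per diagonal entry and per edge. The constants and the exact penalty used in the paper may differ.

```python
import numpy as np

def bic_score(G, S, n):
    """BIC approximation to log p(D|G): log-likelihood at the structured MLE
    minus (1/2) * (#free parameters) * log n. A sketch, not the paper's code."""
    d = S.shape[0]
    Omega = gwishart_mode(G, S, delta=n + 2)   # structured MLE of the precision matrix
    _, logdet = np.linalg.slogdet(Omega)
    loglik = (0.5 * n * logdet - 0.5 * np.sum(Omega * S)
              - 0.5 * n * d * np.log(2 * np.pi))
    n_params = d + int(np.triu(G, 1).sum())    # diagonal entries plus edges
    return loglik - 0.5 * n_params * np.log(n)
```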
40 An important advantage of working with general graphs, instead of decomposable graphs, is that we can leverage simple and stable methods for quickly exploring Markov blankets. [sent-124, score-0.203]
41 A related approach, proposed in [23], uses l1 -regularized linear regression or Lasso to identify the Markov blanket (MB) of each node. [sent-128, score-0.139]
42 These Markov blankets are then combined using intersection or union (AND/OR) to give the global graph G. [sent-129, score-0.188]
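A minimal sketch of this neighborhood-selection step is given below, assuming scikit-learn's LassoCV as the sparse regression and a simple AND/OR combination of the estimated neighborhoods; the cross-validated regularization choice is an assumption, not necessarily the setting used in [23].

```python
import numpy as np
from sklearn.linear_model import LassoCV

def mb_graph(X, rule="and"):
    """Neighborhood selection sketch: lasso-regress each node on the others and
    combine the selected neighborhoods with AND (intersection) or OR (union)."""
    n, d = X.shape
    nbr = np.zeros((d, d), dtype=bool)
    for i in range(d):
        others = np.delete(np.arange(d), i)
        coef = LassoCV(cv=5).fit(X[:, others], X[:, i]).coef_
        nbr[i, others] = coef != 0
    return (nbr & nbr.T) if rule == "and" else (nbr | nbr.T)
```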
43 Our NF framework uses a Markov blanket finding method to derive a set of probability distributions over the local topology at each node, and specifies a rule for combining these into an approximate posterior over graphs. [sent-131, score-0.175]
44 Denote the set of candidate Markov blankets for node i by Ni. [sent-134, score-0.134]
45 Compute the linear regression scores s(b) for each Markov blanket b in Ni, and define pi(b) = exp(s(b)) / Σ_{b′∈Ni} exp(s(b′)) as the node’s Markov blanket proposal density. [sent-135, score-0.405]
46 Independently sample a Markov blanket for each node i from its proposal density pi(b), and then combine all d sampled Markov blankets to assemble a single graph G. [sent-136, score-0.431]
47 Find G’s precision matrix using Equation 7 and compute the graph score as in Section 2. [sent-137, score-0.269]
48 In all the results that follow we use the linear regression BIC score induced by regressing node i on Ni , and generate whole graphs by intersecting the Markov blankets using the AND operator. [sent-139, score-0.489]
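The sketch below illustrates the resulting NF sampler under several assumptions: candidate Markov blankets are taken to be the supports along a lasso (LARS) path, the blanket score s(b) is the ordinary regression BIC up to constants shared across blankets, and the function and parameter names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import lars_path

def nf_sample_graphs(X, n_graphs=1000, rng=None):
    """Neighborhood Fusion sampler sketch: per-node Markov blanket proposal
    densities built once, then full graphs sampled and AND-fused."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    proposals = []
    for i in range(d):
        others = np.delete(np.arange(d), i)
        y, Z = X[:, i], X[:, others]
        _, _, coefs = lars_path(Z, y, method="lasso")   # (d-1, n_steps) coefficients
        blankets, scores = [], []
        for step in range(coefs.shape[1]):
            support = np.flatnonzero(coefs[:, step])
            if support.size:
                beta, *_ = np.linalg.lstsq(Z[:, support], y, rcond=None)
                resid = y - Z[:, support] @ beta
            else:
                resid = y - y.mean()
            blankets.append(others[support])            # map back to node indices
            scores.append(-0.5 * n * np.log(np.mean(resid ** 2))
                          - 0.5 * support.size * np.log(n))
        scores = np.asarray(scores)
        p = np.exp(scores - scores.max())
        proposals.append((blankets, p / p.sum()))       # node's MB proposal density
    graphs = []
    for _ in range(n_graphs):
        B = np.zeros((d, d), dtype=bool)
        for i, (blankets, p) in enumerate(proposals):
            B[i, blankets[rng.choice(len(p), p=p)]] = True
        graphs.append(B & B.T)                          # fuse blankets with AND
    return graphs
```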
49 This essentially constitutes sampling from the “AND-censored” pseudo marginal likelihood and is therefore likely to produce good candidate MBs that can be fused into high-quality graphs. [sent-140, score-0.2]
50 Indeed, the best NF-sampled graphs typically have higher scores than the pseudo “MAP” graph obtained by simply intersecting the best MBs [23], due to the inherent noise in the linear regression BIC scores and the possibility of over-fitting. [sent-142, score-0.541]
51 Thereafter, we sample full graphs in O(d²) time (since we are sampling a discrete p. [sent-153, score-0.22]
52 Like MCMC methods, SLS explores high probability regions of graph space, but unlike MCMC it computes approximate model probabilities directly for each graph it visits. [sent-158, score-0.212]
53 In this section we discuss SLS for both decomposable and general (non-decomposable) GGMs. [sent-161, score-0.203]
54 Specifically, we describe new initialization and edge-marginal updating methods for nondecomposable GGMs, and also introduce a highly effective hybrid NF/SLS method. [sent-162, score-0.127]
55 SLS with decomposable graphs has the advantage that its natural scoring function, the marginal likelihood, can be computed exactly under the conjugate Hyper Inverse Wishart prior. [sent-163, score-0.604]
56 The marginal likelihood can also be updated efficiently when local changes are made to the underlying graph. [sent-164, score-0.161]
57 A state-of-the-art SLS method for decomposable GGMs is given in [29], which can be used with an arbitrary score function over the space of general graphs. [sent-165, score-0.312]
58 Here we consider SLS for general graphs using the Laplace score described in Section 2. [sent-166, score-0.329]
59 If the resulting graph is admissible and has not been visited before, this graph becomes Gt+1 , and we evaluate its score. [sent-168, score-0.212]
60 In the decomposable case, only decomposable graphs are admissible. [sent-170, score-0.626]
61 We should note that unlike exhaustive search methods, this method avoids evaluating the score of all O(d²) neighboring graphs at each iteration, and instead picks one at random. [sent-171, score-0.36]
62 First, the marginal edge probabilities qij are updated online, so edges that have proved useful in the past are more likely to be proposed in the future. [sent-173, score-0.226]
63 In a resampling step we set Gt+1 to Gv , where v ≤ t, with probability proportional to the score (or exponentiated score) of Gv . [sent-175, score-0.169]
64 In a global move we sample a completely new graph (based on the edge marginals qij) for Gt+1. [sent-176, score-0.182]
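A simplified sketch of such a search loop follows, with assumed move probabilities and an assumed online rule that recomputes the edge marginals qij as a score-weighted average over the visited graphs; score_fn stands for any of the graph scores from Section 2, and init_graphs/init_scores anticipate the NF-style seeding described next.

```python
import numpy as np

def sls(score_fn, d, n_steps=5000, p_resample=0.05, p_global=0.05,
        init_graphs=None, init_scores=None, rng=None):
    """Stochastic local search sketch over general graphs; a stand-in, not the
    authors' implementation. score_fn(G) returns a log-score."""
    rng = np.random.default_rng(rng)
    if init_graphs is None:
        init_graphs = [np.zeros((d, d), dtype=bool)]     # start from the empty graph
        init_scores = [score_fn(init_graphs[0])]
    history = [G.copy() for G in init_graphs]
    scores = [float(s) for s in init_scores]
    seen = {G.tobytes() for G in history}
    iu = np.triu_indices(d, 1)

    def edge_marginals():
        w = np.exp(np.asarray(scores) - max(scores))
        w /= w.sum()
        return sum(wl * Gl for wl, Gl in zip(w, history))

    q = edge_marginals()                                  # marginal edge probabilities
    G = history[int(np.argmax(scores))].copy()
    for _ in range(n_steps):
        u = rng.random()
        if u < p_resample:                                # resampling move
            w = np.exp(np.asarray(scores) - max(scores))
            G = history[rng.choice(len(history), p=w / w.sum())].copy()
        elif u < p_resample + p_global:                   # global move from q_ij
            G = np.zeros((d, d), dtype=bool)
            G[iu] = rng.random(len(iu[0])) < q[iu]
            G = G | G.T
        else:                                             # local move: flip one edge
            i, j = rng.choice(d, size=2, replace=False)
            G = G.copy()
            G[i, j] = G[j, i] = not G[i, j]
        key = G.tobytes()
        if key in seen:                                   # only unvisited graphs scored
            continue
        seen.add(key)
        history.append(G.copy())
        scores.append(float(score_fn(G)))
        q = edge_marginals()                              # online q_ij update
    return history, np.asarray(scores), q
```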
65 We now propose a new initialization and updating scheme for non-decomposable SLS based on a set of k initial graphs G1, . . . , Gk. [sent-190, score-0.22]
66 SLS’s main drawback is that, if started from the empty graph as in [29], it will necessarily take at least E steps to find the highest scoring graph, where E is the number of true edges. [sent-200, score-0.159]
67 An even better initialization strategy, for non-decomposable graphs, is to “seed” SLS with a batch of NF-sampled graphs for G1, . . . , Gk. [sent-203, score-0.283]
68 We refer to this new method, where NF is used to both initialize the edge-marginals and seed the graph history, as hybrid NF/SLS. [sent-208, score-0.17]
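Under the assumed interfaces of the earlier sketches, the seeding would look roughly as follows (X is taken to be a zero-mean (n, d) data matrix with scatter matrix S = XᵀX; all names are illustrative).

```python
# Hybrid NF/SLS sketch, reusing the nf_sample_graphs, bic_score and sls
# stand-ins above: NF samples seed both the graph history and the edge marginals.
n, d = X.shape
S = X.T @ X
seeds = nf_sample_graphs(X, n_graphs=500)
seed_scores = [bic_score(G, S, n) for G in seeds]
history, scores, q = sls(lambda G: bic_score(G, S, n), d,
                         init_graphs=seeds, init_scores=seed_scores)
```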
69 accuracy trade-off of the different marginal likelihood approximations in Section 2. [sent-210, score-0.215]
70 , 16, we sample 100 random, sparse precision matrices with an average edge density of 0. [sent-215, score-0.159]
71 Using each approximation method, we score all d(d − 1)/2 neighbors of G obtained from G by single edge flips. [sent-218, score-0.151]
72 We then compute a posterior distribution over this set of graphs by normalizing the scores (or exponentiated scores as appropriate). [sent-219, score-0.467]
73 In Figure 1(a) we show the average error of these posterior approximations as a function of data dimensionality d, as measured by KL divergence. [sent-224, score-0.116]
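The comparison itself reduces to normalizing each method's (log-)scores over the fixed candidate set and measuring KL divergence to the Monte Carlo reference; a small sketch:

```python
import numpy as np

def posterior_over_graphs(log_scores):
    """Normalize per-graph log-scores (BIC, Laplace, or MC estimates) into a
    posterior over a fixed candidate set of graphs."""
    s = np.asarray(log_scores, dtype=float)
    w = np.exp(s - s.max())
    return w / w.sum()

def kl_divergence(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```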
74 In Figure 1(b) we show the average time required to score a single graph as a function of graph size d. [sent-225, score-0.321]
75 We sample 5000 graphs for each of the NF methods and run each of the SLS methods for 5000 steps, also producing 5000 graphs. [sent-234, score-0.22]
76 We compute the score for each set of graphs (diagonal-Laplace for non-decomposable and exact marginal likelihood for decomposables). [sent-236, score-0.49]
77 We extract the 100 best graphs by score, and produce an approximation to p(G|D) by normalizing the exponentiated scores. [sent-237, score-0.255]
78 We report results for individual graphs in the best 100, but our main focus is on performance statistics under Bayesian model averaging (BMA) with approximate scores of each method. [sent-238, score-0.295]
79 For this task we use the “Mutual Funds” (MF) dataset used by [29] for SLS with decomposable GGMs, with d = 59, which they split into 60 months of training data and 26 months of test data. [sent-242, score-0.203]
80 In Figure 1(c) we show a trace plot of scores for the SLS methods and best scores for the NF and tree methods. [sent-245, score-0.15]
81 Interestingly, on this small data set SLS for general graphs (GLS-T) performs rather well. [sent-252, score-0.22]
82 In the second set of tasks, we evaluate the structural recovery of each method by measuring the true positive and false positive rates for edge inclusion w. [sent-254, score-0.129]
83 The rates for individual graphs are shown as small grey symbols while the BMA rate is shown with a large bold colored symbol. [sent-265, score-0.22]
84 For the d = 59 MF dataset in Figure 1(c), NF-sampling 5000 graphs and doing the G-Wishart mode-fits and diagonal-Laplace scoring takes a total of 13 mins, and likewise 30 mins for the synthetic d = 100 dataset in Figure 3. [sent-268, score-0.343]
85 Generating and scoring 5000 graphs with non-decomposable SLS takes 37 mins on the MF dataset and 59 mins on the synthetic one. [sent-269, score-0.484]
86 Decomposable SLS takes 31 mins on MF and 43 mins on the synthetic. [sent-270, score-0.176]
87 The top 100 graphs are shown with a grey symbol and the bold colored symbol is the BMA graph. [sent-287, score-0.22]
88 6 Discussion We offer a practical framework for fast inference in non-decomposable GGMs providing reasonable accuracy for marginal likelihoods. [sent-288, score-0.126]
89 For example, scoring all the neighbors of a 150-node graph via SLS required over 40 days of computation in [20]. [sent-290, score-0.159]
90 We chose Laplace for both Zn and Z0 solely for speed (to avoid MC altogether) and found good agreement between full-Laplace and BIC for much larger graphs than in Figure 1(a). [sent-293, score-0.245]
91 Our Laplace scores also roughly matched the MC values for the Fisher Iris data in [3], selecting essentially the same top-ranked 16 graphs (see Figure 5 in [3]). [sent-294, score-0.295]
92 We note that NF can be partially justified as a pseudo marginal likelihood (PML), but whereas most authors rely only on its maximizer [23] we exploit the full (pseudo) density. [sent-302, score-0.2]
93 Without the AND filter, NF-drawn MBs are sampled from a set of “consistent” full-conditionals in the sense of Besag [5], and their max-BIC MBs are collectively the PML mode (note that here we mean the node regression BIC, not graph BIC). [sent-303, score-0.232]
94 We can also view NF as an overdispersed proposal density; its weighted graphs are a rough proxy for p(G|D). [sent-306, score-0.266]
95 This approximation may be biased but our results show it is quite useful for prediction and imputation (and seeding SLS with high-quality graphs). [sent-307, score-0.129]
96 Bayesian covariance matrix estimation using a mixture of decomposable graphical models. [sent-321, score-0.285]
97 A Monte Carlo method for computing the marginal likelihood in nondecomposable Gaussian graphical models. [sent-326, score-0.281]
98 Bayesian structural learning and estimation in Gaussian graphical models. [sent-439, score-0.142]
99 High dimensional graphs and variable selection with the Lasso. [sent-453, score-0.22]
100 Hyper inverse Wishart distribution for non-decomposable graphs and its application to Bayesian inference for Gaussian graphical models. [sent-474, score-0.304]
wordName wordTfidf (topN-words)
[('nf', 0.54), ('sls', 0.407), ('graphs', 0.22), ('decomposable', 0.203), ('gls', 0.203), ('bic', 0.2), ('laplace', 0.148), ('sg', 0.137), ('ggms', 0.123), ('blanket', 0.113), ('bma', 0.109), ('score', 0.109), ('graph', 0.106), ('marginal', 0.099), ('dls', 0.094), ('imputation', 0.094), ('mbs', 0.094), ('dag', 0.088), ('mins', 0.088), ('blankets', 0.082), ('ugs', 0.078), ('scores', 0.075), ('mf', 0.071), ('monte', 0.07), ('mb', 0.069), ('carlo', 0.065), ('hybrid', 0.064), ('ggm', 0.063), ('nondecomposable', 0.063), ('posterior', 0.062), ('likelihood', 0.062), ('structural', 0.06), ('qij', 0.059), ('graphical', 0.057), ('bayesian', 0.055), ('approximations', 0.054), ('precision', 0.054), ('scoring', 0.053), ('markov', 0.052), ('node', 0.052), ('mode', 0.048), ('fusion', 0.047), ('gij', 0.047), ('fpr', 0.047), ('moghaddam', 0.047), ('tpr', 0.047), ('proposal', 0.046), ('zn', 0.046), ('scatter', 0.045), ('mc', 0.042), ('edge', 0.042), ('pseudo', 0.039), ('carter', 0.038), ('dags', 0.038), ('wishart', 0.038), ('biometrika', 0.035), ('exponentiated', 0.035), ('seeding', 0.035), ('synthetic', 0.035), ('greedy', 0.034), ('ij', 0.034), ('ips', 0.033), ('density', 0.032), ('gt', 0.032), ('baback', 0.031), ('diagonallaplace', 0.031), ('dof', 0.031), ('hiw', 0.031), ('pml', 0.031), ('propulsion', 0.031), ('tfij', 0.031), ('ug', 0.031), ('hessian', 0.031), ('search', 0.031), ('sparse', 0.031), ('hyper', 0.031), ('conjugate', 0.029), ('british', 0.028), ('ratios', 0.028), ('recovery', 0.027), ('emtiyaz', 0.027), ('dobra', 0.027), ('giudici', 0.027), ('glasso', 0.027), ('gv', 0.027), ('hans', 0.027), ('imputed', 0.027), ('hastie', 0.027), ('inference', 0.027), ('edges', 0.026), ('regression', 0.026), ('columbia', 0.026), ('mcmc', 0.026), ('estimation', 0.025), ('resampling', 0.025), ('fij', 0.025), ('wong', 0.025), ('speed', 0.025), ('generating', 0.025), ('discovery', 0.025)]
simIndex simValue paperId paperTitle
same-paper 1 0.9999997 23 nips-2009-Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models
Author: Baback Moghaddam, Emtiyaz Khan, Kevin P. Murphy, Benjamin M. Marlin
Abstract: We make several contributions in accelerating approximate Bayesian structural inference for non-decomposable GGMs. Our first contribution is to show how to efficiently compute a BIC or Laplace approximation to the marginal likelihood of non-decomposable graphs using convex methods for precision matrix estimation. This optimization technique can be used as a fast scoring function inside standard Stochastic Local Search (SLS) for generating posterior samples. Our second contribution is a novel framework for efficiently generating large sets of high-quality graph topologies without performing local search. This graph proposal method, which we call “Neighborhood Fusion” (NF), samples candidate Markov blankets at each node using sparse regression techniques. Our third contribution is a hybrid method combining the complementary strengths of NF and SLS. Experimental results in structural recovery and prediction tasks demonstrate that NF and hybrid NF/SLS out-perform state-of-the-art local search methods, on both synthetic and real-world datasets, when realistic computational limits are imposed.
2 0.14429989 100 nips-2009-Gaussian process regression with Student-t likelihood
Author: Jarno Vanhatalo, Pasi Jylänki, Aki Vehtari
Abstract: In the Gaussian process regression the observation model is commonly assumed to be Gaussian, which is convenient in computational perspective. However, the drawback is that the predictive accuracy of the model can be significantly compromised if the observations are contaminated by outliers. A robust observation model, such as the Student-t distribution, reduces the influence of outlying observations and improves the predictions. The problem, however, is the analytically intractable inference. In this work, we discuss the properties of a Gaussian process regression model with the Student-t likelihood and utilize the Laplace approximation for approximate inference. We compare our approach to a variational approximation and a Markov chain Monte Carlo scheme, which utilize the commonly used scale mixture representation of the Student-t distribution. 1
3 0.12553561 42 nips-2009-Bayesian Sparse Factor Models and DAGs Inference and Comparison
Author: Ricardo Henao, Ole Winther
Abstract: In this paper we present a novel approach to learn directed acyclic graphs (DAGs) and factor models within the same framework while also allowing for model comparison between them. For this purpose, we exploit the connection between factor models and DAGs to propose Bayesian hierarchies based on spike and slab priors to promote sparsity, heavy-tailed priors to ensure identifiability and predictive densities to perform the model comparison. We require identifiability to be able to produce variable orderings leading to valid DAGs and sparsity to learn the structures. The effectiveness of our approach is demonstrated through extensive experiments on artificial and biological data showing that our approach outperform a number of state of the art methods. 1
4 0.12461907 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs
Author: Francois Caron, Arnaud Doucet
Abstract: Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose here extensions of these models where the dependency between samples is given by a known decomposable graph. These models have appealing properties and can be easily learned using Monte Carlo techniques. 1
5 0.1136813 46 nips-2009-Bilinear classifiers for visual recognition
Author: Hamed Pirsiavash, Deva Ramanan, Charless C. Fowlkes
Abstract: We describe an algorithm for learning bilinear SVMs. Bilinear classifiers are a discriminative variant of bilinear models, which capture the dependence of data on multiple factors. Such models are particularly appropriate for visual data that is better represented as a matrix or tensor, rather than a vector. Matrix encodings allow for more natural regularization through rank restriction. For example, a rank-one scanning-window classifier yields a separable filter. Low-rank models have fewer parameters and so are easier to regularize and faster to score at run-time. We learn low-rank models with bilinear classifiers. We also use bilinear classifiers for transfer learning by sharing linear factors between different classification tasks. Bilinear classifiers are trained with biconvex programs. Such programs are optimized with coordinate descent, where each coordinate step requires solving a convex program - in our case, we use a standard off-the-shelf SVM solver. We demonstrate bilinear SVMs on difficult problems of people detection in video sequences and action classification of video sequences, achieving state-of-the-art results in both. 1
6 0.086852364 95 nips-2009-Fast subtree kernels on graphs
7 0.082363315 20 nips-2009-A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
8 0.08050701 224 nips-2009-Sparse and Locally Constant Gaussian Graphical Models
9 0.080313317 120 nips-2009-Kernels and learning curves for Gaussian process regression on random graphs
10 0.079550184 256 nips-2009-Which graphical models are difficult to learn?
11 0.078638576 141 nips-2009-Local Rules for Global MAP: When Do They Work ?
12 0.078040786 18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models
13 0.077434756 248 nips-2009-Toward Provably Correct Feature Selection in Arbitrary Domains
14 0.076952979 168 nips-2009-Non-stationary continuous dynamic Bayesian networks
15 0.07320784 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes
16 0.071228608 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference
17 0.065400653 148 nips-2009-Matrix Completion from Power-Law Distributed Samples
18 0.063252762 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior
19 0.057222836 87 nips-2009-Exponential Family Graph Matching and Ranking
20 0.056233291 187 nips-2009-Particle-based Variational Inference for Continuous Systems
topicId topicWeight
[(0, -0.178), (1, 0.023), (2, -0.017), (3, -0.028), (4, 0.01), (5, -0.122), (6, 0.106), (7, -0.012), (8, -0.057), (9, -0.151), (10, -0.095), (11, 0.035), (12, 0.051), (13, -0.021), (14, -0.017), (15, 0.042), (16, -0.01), (17, -0.032), (18, -0.033), (19, -0.154), (20, -0.038), (21, 0.09), (22, 0.067), (23, 0.042), (24, -0.002), (25, 0.074), (26, 0.015), (27, -0.005), (28, 0.072), (29, -0.187), (30, 0.081), (31, -0.007), (32, 0.009), (33, 0.023), (34, -0.032), (35, 0.006), (36, -0.039), (37, -0.081), (38, -0.008), (39, -0.037), (40, 0.098), (41, -0.031), (42, 0.015), (43, 0.026), (44, 0.015), (45, 0.121), (46, 0.108), (47, 0.093), (48, 0.026), (49, 0.214)]
simIndex simValue paperId paperTitle
same-paper 1 0.93068916 23 nips-2009-Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models
Author: Baback Moghaddam, Emtiyaz Khan, Kevin P. Murphy, Benjamin M. Marlin
Abstract: We make several contributions in accelerating approximate Bayesian structural inference for non-decomposable GGMs. Our first contribution is to show how to efficiently compute a BIC or Laplace approximation to the marginal likelihood of non-decomposable graphs using convex methods for precision matrix estimation. This optimization technique can be used as a fast scoring function inside standard Stochastic Local Search (SLS) for generating posterior samples. Our second contribution is a novel framework for efficiently generating large sets of high-quality graph topologies without performing local search. This graph proposal method, which we call “Neighborhood Fusion” (NF), samples candidate Markov blankets at each node using sparse regression techniques. Our third contribution is a hybrid method combining the complementary strengths of NF and SLS. Experimental results in structural recovery and prediction tasks demonstrate that NF and hybrid NF/SLS out-perform state-of-the-art local search methods, on both synthetic and real-world datasets, when realistic computational limits are imposed.
2 0.65891129 42 nips-2009-Bayesian Sparse Factor Models and DAGs Inference and Comparison
Author: Ricardo Henao, Ole Winther
Abstract: In this paper we present a novel approach to learn directed acyclic graphs (DAGs) and factor models within the same framework while also allowing for model comparison between them. For this purpose, we exploit the connection between factor models and DAGs to propose Bayesian hierarchies based on spike and slab priors to promote sparsity, heavy-tailed priors to ensure identifiability and predictive densities to perform the model comparison. We require identifiability to be able to produce variable orderings leading to valid DAGs and sparsity to learn the structures. The effectiveness of our approach is demonstrated through extensive experiments on artificial and biological data showing that our approach outperform a number of state of the art methods. 1
3 0.60941738 148 nips-2009-Matrix Completion from Power-Law Distributed Samples
Author: Raghu Meka, Prateek Jain, Inderjit S. Dhillon
Abstract: The low-rank matrix completion problem is a fundamental problem with many important applications. Recently, [4],[13] and [5] obtained the first non-trivial theoretical results for the problem assuming that the observed entries are sampled uniformly at random. Unfortunately, most real-world datasets do not satisfy this assumption, but instead exhibit power-law distributed samples. In this paper, we propose a graph theoretic approach to matrix completion that solves the problem for more realistic sampling models. Our method is simpler to analyze than previous methods with the analysis reducing to computing the threshold for complete cascades in random graphs, a problem of independent interest. By analyzing the graph theoretic problem, we show that our method achieves exact recovery when the observed entries are sampled from the Chung-Lu-Vu model, which can generate power-law distributed graphs. We also hypothesize that our algorithm solves the matrix completion problem from an optimal number of entries for the popular preferential attachment model and provide strong empirical evidence for the claim. Furthermore, our method is easy to implement and is substantially faster than existing methods. We demonstrate the effectiveness of our method on random instances where the low-rank matrix is sampled according to the prevalent random graph models for complex networks and present promising preliminary results on the Netflix challenge dataset. 1
4 0.60015446 100 nips-2009-Gaussian process regression with Student-t likelihood
Author: Jarno Vanhatalo, Pasi Jylänki, Aki Vehtari
Abstract: In Gaussian process regression the observation model is commonly assumed to be Gaussian, which is convenient from a computational perspective. However, the drawback is that the predictive accuracy of the model can be significantly compromised if the observations are contaminated by outliers. A robust observation model, such as the Student-t distribution, reduces the influence of outlying observations and improves the predictions. The problem, however, is the analytically intractable inference. In this work, we discuss the properties of a Gaussian process regression model with the Student-t likelihood and utilize the Laplace approximation for approximate inference. We compare our approach to a variational approximation and a Markov chain Monte Carlo scheme, which utilize the commonly used scale mixture representation of the Student-t distribution.
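A minimal numerical sketch of the core step, under assumptions (an RBF kernel, fixed hyperparameters, a generic quasi-Newton solver rather than a tailored Newton scheme): it locates the mode of the latent posterior under a Student-t likelihood, which is where the Laplace approximation is centred.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rbf(x, xp, ell=1.0, sf=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    return sf**2 * np.exp(-0.5 * (x[:, None] - xp[None, :])**2 / ell**2)

def laplace_mode(x, y, nu=4.0, sigma=0.1):
    """Maximize log p(y | f) - 0.5 f' K^{-1} f for a Student-t likelihood;
    the maximizer is the mean of the Laplace approximation to p(f | y)."""
    K = rbf(x, x) + 1e-6 * np.eye(len(x))
    K_inv = np.linalg.inv(K)

    def neg_obj(f):
        r = y - f
        loglik = -0.5 * (nu + 1) * np.sum(np.log1p(r**2 / (nu * sigma**2)))
        return -(loglik - 0.5 * f @ K_inv @ f)

    def neg_grad(f):
        r = y - f
        dloglik = (nu + 1) * r / (nu * sigma**2 + r**2)
        return -(dloglik - K_inv @ f)

    return minimize(neg_obj, np.zeros_like(y), jac=neg_grad, method="L-BFGS-B").x

x = np.linspace(0, 5, 40)
y = np.sin(x) + 0.05 * rng.standard_normal(40)
y[10] += 3.0                        # one outlier: the t-likelihood mode largely ignores it
f_hat = laplace_mode(x, y)
```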
5 0.58739495 143 nips-2009-Localizing Bugs in Program Executions with Graphical Models
Author: Laura Dietz, Valentin Dallmeier, Andreas Zeller, Tobias Scheffer
Abstract: We devise a graphical model that supports the process of debugging software by guiding developers to code that is likely to contain defects. The model is trained using execution traces of passing test runs; it reflects the distribution over transitional patterns of code positions. Given a failing test case, the model determines the least likely transitional pattern in the execution trace. The model is designed such that Bayesian inference has a closed-form solution. We evaluate the Bernoulli graph model on data from the software projects AspectJ and Rhino.
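The paper's Bernoulli graph model is richer than this, but the sketch below shows the closed-form flavour under assumed Beta priors and toy traces: Beta-Bernoulli posteriors over transitions estimated from passing runs, and a failing run scored by its least likely transition.

```python
from collections import Counter

def transition_posteriors(passing_traces, alpha=1.0, beta=1.0):
    """Conjugate Beta-Bernoulli posterior mean for 'transition t occurs in a passing run':
    (count + alpha) / (n_runs + alpha + beta), in closed form."""
    n = len(passing_traces)
    counts = Counter(t for trace in passing_traces for t in set(zip(trace, trace[1:])))
    post = {t: (c + alpha) / (n + alpha + beta) for t, c in counts.items()}
    unseen = alpha / (n + alpha + beta)       # posterior mean for never-observed transitions
    return post, unseen

def most_suspicious(failing_trace, post, unseen):
    """Return the failing run's least likely transition, i.e. the most suspicious code step."""
    transitions = set(zip(failing_trace, failing_trace[1:]))
    return min(transitions, key=lambda t: post.get(t, unseen))

passing = [["a", "b", "c", "d"], ["a", "b", "c", "d"], ["a", "c", "d"]]
failing = ["a", "b", "d"]                     # skips "c" -- an unusual transition
post, unseen = transition_posteriors(passing)
print(most_suspicious(failing, post, unseen)) # ('b', 'd')
```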
6 0.55729187 256 nips-2009-Which graphical models are difficult to learn?
7 0.49256968 141 nips-2009-Local Rules for Global MAP: When Do They Work ?
8 0.47512522 224 nips-2009-Sparse and Locally Constant Gaussian Graphical Models
9 0.46885493 56 nips-2009-Conditional Neural Fields
10 0.46305439 168 nips-2009-Non-stationary continuous dynamic Bayesian networks
11 0.44326395 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs
12 0.42751884 120 nips-2009-Kernels and learning curves for Gaussian process regression on random graphs
13 0.42248604 170 nips-2009-Nonlinear directed acyclic structure learning with weakly additive noise models
14 0.40043277 46 nips-2009-Bilinear classifiers for visual recognition
15 0.39428484 152 nips-2009-Measuring model complexity with the prior predictive
16 0.39350227 18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models
17 0.39284682 59 nips-2009-Construction of Nonparametric Bayesian Models from Parametric Bayes Equations
18 0.37172547 89 nips-2009-FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs
19 0.35870573 87 nips-2009-Exponential Family Graph Matching and Ranking
20 0.35255903 188 nips-2009-Perceptual Multistability as Markov Chain Monte Carlo Inference
topicId topicWeight
[(7, 0.012), (21, 0.014), (24, 0.033), (25, 0.085), (35, 0.11), (36, 0.093), (39, 0.049), (58, 0.069), (61, 0.017), (71, 0.05), (81, 0.021), (86, 0.065), (96, 0.288)]
simIndex simValue paperId paperTitle
same-paper 1 0.77640724 23 nips-2009-Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models
Author: Baback Moghaddam, Emtiyaz Khan, Kevin P. Murphy, Benjamin M. Marlin
Abstract: We make several contributions in accelerating approximate Bayesian structural inference for non-decomposable GGMs. Our first contribution is to show how to efficiently compute a BIC or Laplace approximation to the marginal likelihood of non-decomposable graphs using convex methods for precision matrix estimation. This optimization technique can be used as a fast scoring function inside standard Stochastic Local Search (SLS) for generating posterior samples. Our second contribution is a novel framework for efficiently generating large sets of high-quality graph topologies without performing local search. This graph proposal method, which we call “Neighborhood Fusion” (NF), samples candidate Markov blankets at each node using sparse regression techniques. Our third contribution is a hybrid method combining the complementary strengths of NF and SLS. Experimental results in structural recovery and prediction tasks demonstrate that NF and hybrid NF/SLS out-perform state-of-the-art local search methods, on both synthetic and real-world datasets, when realistic computational limits are imposed.
2 0.76554316 146 nips-2009-Manifold Regularization for SIR with Rate Root-n Convergence
Author: Wei Bian, Dacheng Tao
Abstract: In this paper, we study manifold regularization for Sliced Inverse Regression (SIR). The manifold regularization improves standard SIR in two respects: 1) it encodes the local geometry of the data for SIR, and 2) it enables SIR to handle transductive and semi-supervised learning problems. We prove that the proposed graph-Laplacian-based regularization converges at the root-n rate. The projection directions of the regularized SIR are optimized using a conjugate gradient method on the Grassmann manifold. Experimental results support our theory.
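One plausible reading of the construction is sketched below, under explicit assumptions: a k-NN connectivity graph, a graph-Laplacian penalty added to the covariance in the generalized eigenproblem, and illustrative values for lambda, the slice count, and k. The paper's exact regularizer and its conjugate gradient optimization on the Grassmann manifold are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def regularized_sir(X, y, n_slices=10, n_dirs=2, lam=1e-2, k=10):
    """Sliced Inverse Regression with a graph-Laplacian penalty (one possible variant)."""
    n, d = X.shape
    Xc = X - X.mean(0)
    cov = Xc.T @ Xc / n

    # Between-slice covariance of the inverse regression curve E[X | y in slice]
    M = np.zeros((d, d))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Xc[idx].mean(0)
        M += (len(idx) / n) * np.outer(m, m)

    # k-NN graph Laplacian encoding the local geometry of the inputs
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()
    L = np.diag(W.sum(1)) - W

    # Directions come from a generalized eigenproblem with the regularized scatter
    B = cov + lam * Xc.T @ L @ Xc / n**2 + 1e-8 * np.eye(d)
    _, vecs = eigh(M, B)
    return vecs[:, ::-1][:, :n_dirs]

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 6))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
B_dirs = regularized_sir(X, y)    # columns span an estimated dimension-reduction subspace
```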
3 0.72835523 62 nips-2009-Correlation Coefficients are Insufficient for Analyzing Spike Count Dependencies
Author: Arno Onken, Steffen Grünewälder, Klaus Obermayer
Abstract: The linear correlation coefficient is typically used to characterize and analyze dependencies of neural spike counts. Here, we show that the correlation coefficient is in general insufficient to characterize these dependencies. We construct two-neuron spike count models with Poisson-like marginals and vary their dependence structure using copulas. To this end, we construct a copula that allows us to keep the spike counts uncorrelated while varying their dependence strength. Moreover, we employ a network of leaky integrate-and-fire neurons to investigate whether weakly correlated spike counts with strong dependencies are likely to occur in real networks. We find that the entropy of uncorrelated but dependent spike count distributions can deviate from the corresponding distribution with independent components by more than 25%, and that weakly correlated but strongly dependent spike counts are very likely to occur in biological networks. Finally, we introduce a test for deciding whether the dependence structure of distributions with Poisson-like marginals is well characterized by the linear correlation coefficient and verify it for different copula-based models.
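The paper constructs a specially designed copula that keeps counts uncorrelated while still dependent; the simpler Gaussian-copula sketch below (with assumed firing rates and latent correlation) only illustrates the general recipe of imposing a dependence structure on Poisson marginals.

```python
import numpy as np
from scipy.stats import norm, poisson

def copula_spike_counts(n, rates=(5.0, 5.0), rho=0.4, seed=0):
    """Draw dependent spike-count pairs with Poisson marginals via a Gaussian copula:
    a latent bivariate normal carries the dependence, the Poisson inverse CDF sets the margins."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = norm.cdf(z)                                         # uniform marginals, dependent via rho
    counts = np.column_stack([poisson.ppf(u[:, i], rates[i]) for i in range(2)])
    return counts.astype(int)

counts = copula_spike_counts(10000)
print(np.corrcoef(counts.T)[0, 1])   # the linear correlation summarizes only part of the dependence
```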
4 0.71147025 154 nips-2009-Modeling the spacing effect in sequential category learning
Author: Hongjing Lu, Matthew Weiden, Alan L. Yuille
Abstract: We develop a Bayesian sequential model for category learning. The sequential model updates two category parameters, the mean and the variance, over time. We define conjugate temporal priors so that closed-form solutions can be obtained. This model can be easily extended to supervised and unsupervised learning involving multiple categories. To model the spacing effect, we introduce a generic prior in the temporal updating stage to capture a learning preference, namely, less change for repetition and more change for variation. Finally, we show how this approach can be generalized to efficiently perform model selection to decide whether observations are from one or multiple categories.
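A minimal sketch of the closed-form conjugate update such a sequential model builds on, here a static Normal-Inverse-Gamma update of a category's mean and variance after each observation; the temporal priors that produce the spacing effect are not included, and the hyperparameters and data are assumed.

```python
def nig_update(mu, kappa, alpha, beta, x):
    """One-step conjugate Normal-Inverse-Gamma update after observing x:
    the posterior over (mean, variance) stays in the same family, so updates are closed form."""
    mu_new = (kappa * mu + x) / (kappa + 1)
    beta_new = beta + 0.5 * kappa * (x - mu) ** 2 / (kappa + 1)
    return mu_new, kappa + 1, alpha + 0.5, beta_new

params = (0.0, 1.0, 2.0, 2.0)                 # prior: mu0, kappa0, alpha0, beta0
for x in [0.9, 1.1, 1.0, 3.5, 1.05]:          # observations arrive one at a time
    params = nig_update(*params, x)
mu, kappa, alpha, beta = params
print("posterior mean:", mu, "expected variance:", beta / (alpha - 1))
```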
5 0.67194092 94 nips-2009-Fast Learning from Non-i.i.d. Observations
Author: Ingo Steinwart, Andreas Christmann
Abstract: We prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from α-mixing processes. To illustrate this oracle inequality, we use it to derive learning rates for some learning methods including least squares SVMs. Since the proof of the oracle inequality uses recent localization ideas developed for independent and identically distributed (i.i.d.) processes, it turns out that these learning rates are close to the optimal rates known in the i.i.d. case.
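For context only, under stated assumptions (Gaussian kernel, fixed width and regularization, i.i.d. toy data rather than an α-mixing process): a least-squares SVM in its kernel-ridge form, with the bias term omitted, is the kind of regularized empirical risk minimizer such an oracle inequality covers.

```python
import numpy as np

def lssvm_fit(X, y, gamma=1.0, lam=1e-2):
    """Least-squares SVM regression (kernel-ridge form, no bias): solve (K + lam*n*I) a = y."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    a = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    def predict(Xs):
        Ks = np.exp(-gamma * np.sum((Xs[:, None, :] - X[None, :, :]) ** 2, axis=-1))
        return Ks @ a
    return predict

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
f = lssvm_fit(X, y)
print(np.mean((f(X) - np.sin(3 * X[:, 0])) ** 2))   # training error of the fitted regressor
```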
6 0.56342906 34 nips-2009-Anomaly Detection with Score functions based on Nearest Neighbor Graphs
7 0.56305832 219 nips-2009-Slow, Decorrelated Features for Pretraining Complex Cell-like Networks
8 0.55765611 131 nips-2009-Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition
9 0.55633718 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition
10 0.55572677 113 nips-2009-Improving Existing Fault Recovery Policies
11 0.55057275 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction
12 0.54737878 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling
13 0.54639351 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference
14 0.54635632 168 nips-2009-Non-stationary continuous dynamic Bayesian networks
15 0.54576236 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA
16 0.54554838 97 nips-2009-Free energy score space
17 0.54454732 30 nips-2009-An Integer Projected Fixed Point Method for Graph Matching and MAP Inference
18 0.54433364 167 nips-2009-Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
19 0.54298711 100 nips-2009-Gaussian process regression with Student-t likelihood
20 0.54203135 42 nips-2009-Bayesian Sparse Factor Models and DAGs Inference and Comparison