nips nips2010 nips2010-67 knowledge-graph by maker-knowledge-mining

67 nips-2010-Dynamic Infinite Relational Model for Time-varying Relational Data Analysis


Source: pdf

Author: Katsuhiko Ishiguro, Tomoharu Iwata, Naonori Ueda, Joshua B. Tenenbaum

Abstract: We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions, and split & merge of relation clusters like communities in social networks. Our proposed model abstracts observed time-varying object-object relationships into relationships between object clusters. We extend the infinite Hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate the number of clusters simultaneously. We show the usefulness of the model through experiments with synthetic and real-world data sets.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions, and split & merge of relation clusters like communities in social networks. [sent-7, score-0.797]

2 Our proposed model abstracts observed time-varying object-object relationships into relationships between object clusters. [sent-8, score-0.12]

3 We extend the infinite Hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate the number of clusters simultaneously. [sent-9, score-0.644]

4 Many statistical models for relational data have been presented [10, 1, 18]. [sent-12, score-0.35]

5 The stochastic block model (SBM) [11] and the infinite relational model (IRM) [8] partition objects into clusters so that the relations between clusters abstract the relations between objects well. [sent-13, score-0.878]

6 SBM requires specifying the number of clusters in advance, while IRM automatically estimates the number of clusters. [sent-14, score-0.19]

7 Similarly, the mixed membership model [2] associates each object with multiple clusters (roles) rather than a single cluster. [sent-15, score-0.286]

8 However, a large amount of relational data in the real world is time-varying. [sent-17, score-0.35]

9 Recently some researchers have investigated the dynamics in relational data. [sent-23, score-0.35]

10 They assumed a transition probability matrix, as in an HMM, which governs the cluster assignments of all objects at all time steps. [sent-28, score-0.437]

11 Thus, it cannot represent more complicated time variations such as split & merge of clusters that only occur temporarily. [sent-30, score-0.324]

12 This model is very general for time-series relational data modeling, and is good at tracking gradual, continuous changes in the relationships. [sent-34, score-0.421]

13 In addition, previous models assume that the number of clusters is fixed and known, although it is difficult to determine a priori. [sent-37, score-0.19]

14 In this paper we propose yet another time-varying relational data model that deals with temporal and dynamic changes of cluster structures such as additions, deletions, and split & merge of clusters. [sent-38, score-0.851]

15 Instead of the continuous world view of [4], we assume a discrete structure: distinct clusters with discrete transitions over time, allowing for birth, death and split & merge dynamics. [sent-39, score-0.33]

16 More specifically, we extend IRM for time-varying relational data by using a variant of the infinite HMM (iHMM) [15, 3]. [sent-40, score-0.35]

17 By incorporating the idea of the iHMM, our model is able to infer clusters of objects without specifying the number of clusters in advance. [sent-41, score-0.454]

18 This specific form of iHMM enables the model to represent time-sensitive dynamic properties such as split & merge of clusters. [sent-43, score-0.167]

19 2 Infinite Relational Model. We first explain the infinite relational model (IRM) [8], which can estimate the number of hidden clusters from relational data. [sent-45, score-0.918]

20 In the IRM, a Dirichlet process (DP) is used as a prior over clusterings with an unknown number of clusters; it is denoted DP(γ, G0), where γ > 0 is a concentration parameter and G0 is a base measure. [sent-46, score-0.19]
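To make the DP prior concrete, the following is a minimal Python sketch (our own illustration, not the paper's code) of drawing a partition from DP(γ, G0) via the equivalent Chinese restaurant process; the function name and the value of γ used below are assumptions for illustration only.

import numpy as np

def crp_partition(n_objects, gamma, rng):
    # Chinese restaurant process: an equivalent sampling view of a
    # DP(gamma, G0) prior over partitions; the number of clusters is
    # not fixed in advance and grows with the data.
    z = [0]  # the first object starts cluster 0
    for i in range(1, n_objects):
        counts = np.bincount(z)
        # probability of joining each existing cluster, plus a new one
        probs = np.append(counts, gamma) / (i + gamma)
        z.append(int(rng.choice(len(probs), p=probs)))
    return np.array(z)

rng = np.random.default_rng(0)
z = crp_partition(50, gamma=1.0, rng=rng)
print("number of realized clusters:", z.max() + 1)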

21 The IRM is an application of the DP to relational data. [sent-54, score-0.35]

22 The IRM divides the set of N objects into multiple clusters based on the observed relational data X = {x_{i,j} ∈ {0, 1}; 1 ≤ i, j ≤ N}. [sent-60, score-0.614]

23 The IRM is able to infer the number of clusters at the same time because it uses a DP as the prior distribution over cluster partitions. [sent-61, score-0.473]

24 We sample a cluster index for object i, z_i = k, k ∈ {1, 2, . . . [sent-74, score-0.355]

25 In Eq. (4), η_{k,l} is the strength of the relation between the objects in clusters k and l. [sent-80, score-0.295]

26 Generating the observed relational data x_{i,j} follows Eq. [sent-81, score-0.35]

27 (5), conditioned on the cluster assignments Z and the strengths H. [sent-82, score-0.289]
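As a reading aid, here is a minimal sketch of the IRM generative process just described, reusing the crp_partition helper from the sketch above; the Beta hyperparameters a and b and the function name are illustrative assumptions, not values fixed by the paper.

import numpy as np

def irm_generate(n_objects, gamma=1.0, a=0.5, b=0.5, seed=0):
    # Sketch of the IRM generative process: cluster assignments z from
    # a CRP, strengths eta[k, l] ~ Beta(a, b), and observations
    # x[i, j] ~ Bernoulli(eta[z[i], z[j]]).
    rng = np.random.default_rng(seed)
    z = crp_partition(n_objects, gamma, rng)   # from the sketch above
    K = z.max() + 1
    eta = rng.beta(a, b, size=(K, K))          # relation strengths H
    probs = eta[z[:, None], z[None, :]]        # N x N Bernoulli means
    x = rng.binomial(1, probs)                 # observed relations X
    return z, eta, x

z, eta, x = irm_generate(30)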

28 Dynamic Infinite Relational Model (dIRM): Time-varying relational data. First, we define the time-varying relational data considered in this paper. [sent-84, score-0.7]

29 Time-varying relational data X have three subscripts t, i, and j: X = {x_{t,i,j} ∈ {0, 1}}, where i, j ∈ {1, 2, . . . [sent-85, score-0.35]

30 We assume that there is no relation between objects belonging to different time steps t and t′. [sent-94, score-0.161]

31 The time-varying relational data X is a set of T (static) relational data matrices, one for each of the T time steps. [sent-95, score-0.729]
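In code, one convenient (and purely illustrative) representation of such data is a binary array of shape (T, N, N), so that X[t] is the static relational matrix at time step t:

import numpy as np

T, N = 5, 32                       # e.g., the IOtable data used later
X = np.zeros((T, N, N), dtype=np.int8)
X[0, 3, 7] = 1                     # objects 3 and 7 are related at t = 0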

32 It is natural to assume that every object may move between clusters as time evolves. [sent-101, score-0.286]

33 Observing several real-world time-varying relational data sets, we assume the following properties of transitions: • P1. [sent-102, score-0.35]

34 Time evolutions of clusters are neither stationary nor uniform. [sent-105, score-0.258]

35 The number of clusters is time-varying and unknown a priori. [sent-107, score-0.19]

36 P1 is a common assumption for many kinds of time series data, not limited to relational data. [sent-108, score-0.379]

37 P2 aims to distinguish occasional, drastic changes from frequent, minor modifications in relational networks. [sent-111, score-0.417]

38 This will cause an addition and deletion of a user cluster (community). [sent-115, score-0.254]

39 We first consider several straightforward solutions based on the IRM for analyzing time-varying relational data. [sent-119, score-0.35]

40 The simplest way is to convert the time-varying relational data X into “static” relational data X̃ = {x̃_{i,j}} and apply the IRM to X̃. [sent-120, score-0.7]

41 This solution cannot represent the time changes of clustering because it assumes the same clustering result for all time steps (z_{1,i} = z_{2,i} = · · · = z_{T,i}). [sent-122, score-0.196]
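The conversion itself is not spelled out in this summary, so here is one plausible rule, stated as an assumption: a pair counts as related in the static data if it is related at any time step.

import numpy as np

X = np.random.default_rng(0).integers(0, 2, size=(5, 32, 32))  # stand-in data
X_static = X.max(axis=0)            # related if ever related; shape (N, N)
# An alternative rule: threshold the relation frequency over time.
X_static_avg = (X.mean(axis=0) > 0.5).astype(np.int8)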

42 We may separate the time-varying relational data X into a series of time-step-wise relational data Xt and apply the IRM to each Xt. [sent-123, score-0.729]

43 Since β is shared over all time steps, we may expect that the clustering results between time steps will have higher correlations. [sent-129, score-0.12]

44 This implies that the tIRM is not suitable for modeling time evolutions, since the order of the time steps is ignored in the model. [sent-131, score-0.154]

45 Dynamic IRM. To address the three conditions P1–P3 above, we propose a new probabilistic model called the dynamic infinite relational model (dIRM). [sent-133, score-0.474]

46 Eq. (12) is the transition probability that an object remaining in cluster k ∈ {1, 2, . . . } [sent-150, score-0.339]

47 at time t − 1 will move to cluster l ∈ {1, 2, . . . } at time t. [sent-153, score-0.283]

48 This implies that this DP encourages the self-transitions of objects, and we can achieve the property P1 for time-varying relational data. [sent-164, score-0.35]

49 π_{t,k} is sampled for every time step t; thus, we can model time-varying patterns of transitions, including additions, deletions, and split & merge of clusters as extreme cases. [sent-166, score-0.362]

50 These changes happen only temporarily; therefore, time-dependent transition probabilities are indispensable for our purpose. [sent-167, score-0.114]

51 Note that the transition probability is also dependent on the cluster index k, as in conventional iHMMs. [sent-168, score-0.326]

52 Also, the dIRM can automatically determine the number of clusters thanks to the DP; this satisfies P3. [sent-169, score-0.19]

53 Equation (13) generates a cluster assignment for object i at time t, based on the cluster where the object was previously (z_{t−1,i}) and the transition probability π. [sent-170, score-0.408]

54 Equation (14) generates a strength parameter η for the pair of clusters k and l; we then obtain the observed sample x_{t,i,j}. [sent-171, score-0.19]
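Putting Eqs. (12)-(14) together, here is a minimal truncated sketch of the dIRM generative story; the truncation level K, the stickiness weight kappa, and all hyperparameter values are our own illustrative assumptions rather than the paper's exact construction.

import numpy as np

def dirm_generate(T, N, K, alpha=1.0, kappa=5.0, a=0.5, b=0.5, seed=0):
    # Truncated sketch of the dIRM: time- and cluster-dependent
    # transition rows pi[t, k] with extra self-transition mass (the
    # "sticky" part), then assignments z, strengths eta, and
    # Bernoulli observations X.
    rng = np.random.default_rng(seed)
    beta = rng.dirichlet(np.full(K, alpha))    # global mixing ratios
    pi = np.empty((T, K, K))
    for t in range(T):
        for k in range(K):
            # Eq. (12)-style: Dirichlet row biased toward staying in k
            pi[t, k] = rng.dirichlet(alpha * beta + kappa * np.eye(K)[k])
    z = np.empty((T, N), dtype=int)
    z[0] = rng.choice(K, size=N, p=beta)       # initial assignments
    for t in range(1, T):
        for i in range(N):                     # Eq. (13)-style step
            z[t, i] = rng.choice(K, p=pi[t, z[t - 1, i]])
    eta = rng.beta(a, b, size=(K, K))          # Eq. (14)-style strengths
    X = np.array([rng.binomial(1, eta[z[t][:, None], z[t][None, :]])
                  for t in range(T)])
    return z, eta, X

z, eta, X = dirm_generate(T=5, N=32, K=8)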

55 Thus, we may interpret the dIRM as an extension of the iHMM that has N (the number of objects) hidden sequences to handle relational data. [sent-176, score-0.378]

56 Given U, the number of clusters can be reduced to a finite number during inference, which enables efficient sampling of the variables. [sent-179, score-0.221]

57 Because of the indicator I(u < π) in Eq. (20), the cluster indices k are limited to a finite set. [sent-197, score-0.277]

58 First, β is represented as a (K + 1)-dimensional vector (the mixing ratios of unrepresented clusters are aggregated in β_{K+1} = 1 − Σ_{k=1}^{K} β_k). [sent-201, score-0.19]
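The truncation effect of the slice variable can be sketched as follows; this helper is a hedged illustration of the indicator I(u < π), not the paper's sampler.

import numpy as np

def active_clusters(pi_row, u):
    # Given transition probabilities pi_row (in principle extendable to
    # infinitely many clusters) and a slice value u, only the finitely
    # many clusters with pi_row[k] > u survive the indicator I(u < pi),
    # so the sampler only ever enumerates a finite candidate set.
    return np.flatnonzero(pi_row > u)

pi_row = np.array([0.45, 0.30, 0.15, 0.07, 0.03])
u = np.random.default_rng(0).uniform(0.0, pi_row[2])  # slice under the current prob
print(active_clusters(pi_row, u))   # finite set of candidate cluster indices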

59 To apply the IRM to time-varying relational data, we use Eq. [sent-222, score-0.35]

60 To synthesize datasets, we first determined the number of time steps T, the number of clusters K, and the number of objects N. [sent-230, score-0.321]

61 Next, we manually assigned z_{t,i} in order to obtain cluster split & merge, additions, and deletions. [sent-231, score-0.285]

62 After obtaining Z, we defined the connection strengths between clusters, H = {η_{k,l}}. [sent-232, score-0.19]

63 IO tables summarize the transactions of goods and services between industrial sectors. [sent-240, score-0.141]

64 Each element e_{i,j} of the matrix indicates that one unit of demand in the jth sector induces e_{i,j} units of production in the ith sector. [sent-242, score-0.113]

65 Thus we obtain time-varying relational data with N = 32 and T = 5. [sent-245, score-0.35]

66 Differences in the number of realized clusters were computed between Ẑ_t and Z_t, and we averaged these errors over the T steps. [sent-262, score-0.19]
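Read literally, this error metric can be computed as below; interpreting "realized clusters" as the count of distinct labels actually in use at each time step is our reading of the sentence, so treat the helper as an assumption.

import numpy as np

def cluster_count_error(Z_hat, Z_true):
    # Average over the T time steps of the absolute difference between
    # the number of realized (non-empty) clusters in the estimate and
    # in the ground truth; Z_hat, Z_true have shape (T, N).
    errs = [abs(len(np.unique(zh)) - len(np.unique(zt)))
            for zh, zt in zip(Z_hat, Z_true)]
    return float(np.mean(errs))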

67 In particular, the dIRM showed good results on the Synth2 and Enron datasets, where the changes in relationships are highly dynamic and unstable. [sent-300, score-0.144]

68 Thus we can say that the dIRM is superior for modeling time-varying relational data, especially highly dynamic data. [sent-302, score-0.412]

69 Panel (a) illustrates the estimated η_{k,l} obtained by the dIRM, and panel (b) presents the time evolution of the cluster assignments. [sent-305, score-0.283]

70 For example, the dIRM groups the machine industries into cluster 5, and infrastructure-related industries are grouped into cluster 13. [sent-308, score-0.602]

71 For example, demand for the machine industries (cluster 5) induces large production in the “iron and steel” sector (cluster 7). [sent-312, score-0.16]

72 However, the sector moves to cluster 1 afterwards, which does not connect strongly with clusters 5 and 7. [sent-317, score-0.553]

73 Next, the “transport” sector enlarges its role in the market by moving to cluster 14, which causes the deletion of cluster 8. [sent-319, score-0.59]

74 From 1985 to 2000, this sector is in cluster 9, which is rather independent of the other clusters. [sent-321, score-0.336]

75 However, in 2005 the cluster split, and the telecom industry merged into cluster 1, an influential cluster. [sent-322, score-0.604]

76 Figure 4(a) tells us that clusters 1–7 are relatively separated communities. [sent-326, score-0.19]

77 For example, members of cluster 4 belong to restricted business domains such as energy, gas, or pipeline businesses. [sent-327, score-0.254]

78 Cluster 5 is a community of financial and monetary departments, and cluster 7 is a community of managers such as vice presidents and CFOs. [sent-328, score-0.338]

79 One interesting result from the dIRM is finding cluster 9. [sent-329, score-0.254]

80 This cluster notably sends many messages to the other clusters, especially to the management cluster 7. [sent-330, score-0.508]

81 Only three objects belong to this cluster throughout the time steps, but these members are the key persons at that time. [sent-331, score-0.384]

82 [Figure 3(b) residue: the “iron and steel” sector is shown in cluster 7 at every time step.] [sent-333, score-0.875]

83 [Figure 3 axis-tick residue removed; the axes are indexed by the cluster indices k and l.] Figure 3: (a) Example of estimated η_{k,l} (strength of the relationship between clusters k and l) for the IOtable data by the dIRM. [sent-341, score-0.19]

84 (d) Time-varying clustering assignments for selected clusters by dIRM. [sent-342, score-0.259]

85 [Figure 4 residue: panel annotations include an “Inactive” object cluster, the CEO of Enron America, the founder, and the COO.] Figure 4: (a) Example of estimated η_{k,l} for the Enron dataset using the dIRM. [sent-343, score-0.369]

86 (b): Number of objects belonging to each cluster at each time step for the Enron dataset using the dIRM. [sent-344, score-0.28]

87 First, the CEO of Enron America stayed in cluster 9 in May (t = 5). [sent-345, score-0.254]

88 Next, the founder of Enron was a member of the cluster in August (t = 8). [sent-346, score-0.295]

89 Finally, the COO belonged to the cluster in October (t = 10). [sent-348, score-0.254]

90 Figure 4(b) presents the time evolution of the cluster memberships; i.e., [sent-351, score-0.351]

91 the number of objects belonging to each cluster at each time step. [sent-353, score-0.384]

92 For example, the volume of cluster 6 (the inactive cluster) decreases as time evolves. [sent-356, score-0.254]

93 In contrast, cluster 4 is stable in membership. [sent-358, score-0.277]

94 6 Conclusions. We proposed a new time-varying relational data model that is able to represent dynamic changes of cluster structures. [sent-361, score-0.708]

95 The dynamic IRM (dIRM) incorporates a variant of the iHMM and represents time-sensitive dynamic properties such as split & merge of clusters. [sent-362, score-0.229]

96 Experiments with synthetic and real-world time series datasets showed that the proposed model improves the precision of time-varying relational data analysis. [sent-364, score-0.445]

97 Learning systems of concepts with an infinite relational model. [sent-426, score-0.35]

98 The Enron corpus: A new dataset for email classification research. [sent-431, score-0.31]

99 A Bayesian approach toward finding communities and their evolutions in dynamic social networks. [sent-482, score-0.157]

100 Stochastic relational models for large-scale dyadic data using mcmc. [sent-494, score-0.35]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('dirm', 0.436), ('relational', 0.35), ('enron', 0.276), ('irm', 0.276), ('cluster', 0.254), ('iotables', 0.218), ('machinery', 0.213), ('clusters', 0.19), ('tirm', 0.156), ('transport', 0.15), ('services', 0.141), ('zt', 0.137), ('gas', 0.096), ('telecom', 0.096), ('ihmm', 0.093), ('iron', 0.093), ('steel', 0.082), ('broadcast', 0.082), ('enterprise', 0.082), ('sector', 0.082), ('dp', 0.079), ('merge', 0.074), ('objects', 0.074), ('commerce', 0.071), ('insurance', 0.071), ('disposal', 0.068), ('evolutions', 0.068), ('rand', 0.064), ('petroleum', 0.063), ('waste', 0.063), ('dynamic', 0.062), ('consumer', 0.059), ('finance', 0.059), ('electric', 0.053), ('trades', 0.053), ('di', 0.05), ('additions', 0.05), ('beta', 0.05), ('water', 0.048), ('ceo', 0.047), ('ihmms', 0.047), ('industries', 0.047), ('powers', 0.046), ('transition', 0.045), ('deleted', 0.045), ('community', 0.042), ('changes', 0.042), ('founder', 0.041), ('nxn', 0.041), ('inverted', 0.041), ('object', 0.04), ('relationships', 0.04), ('electronic', 0.039), ('deletions', 0.038), ('datasets', 0.037), ('stick', 0.037), ('slice', 0.036), ('mining', 0.036), ('assignments', 0.035), ('erence', 0.035), ('month', 0.035), ('transitions', 0.035), ('clustering', 0.034), ('dataset', 0.034), ('zi', 0.034), ('erent', 0.033), ('membership', 0.033), ('split', 0.031), ('coo', 0.031), ('imoto', 0.031), ('productions', 0.031), ('sectors', 0.031), ('yoshida', 0.031), ('sampling', 0.031), ('relation', 0.031), ('precision', 0.029), ('time', 0.029), ('hidden', 0.028), ('fox', 0.028), ('steps', 0.028), ('indispensable', 0.027), ('memberships', 0.027), ('sbm', 0.027), ('transits', 0.027), ('index', 0.027), ('belonging', 0.027), ('social', 0.027), ('ei', 0.025), ('departments', 0.025), ('drastic', 0.025), ('nite', 0.025), ('dirichlet', 0.024), ('hyperlink', 0.024), ('america', 0.023), ('mixed', 0.023), ('stable', 0.023), ('hyperparameters', 0.023), ('indices', 0.023), ('posteriors', 0.022), ('inactive', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 67 nips-2010-Dynamic Infinite Relational Model for Time-varying Relational Data Analysis

Author: Katsuhiko Ishiguro, Tomoharu Iwata, Naonori Ueda, Joshua B. Tenenbaum

Abstract: We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions, and split & merge of relation clusters like communities in social networks. Our proposed model abstracts observed time-varying object-object relationships into relationships between object clusters. We extend the infinite Hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate the number of clusters simultaneously. We show the usefulness of the model through experiments with synthetic and real-world data sets.

2 0.17906117 71 nips-2010-Efficient Relational Learning with Hidden Variable Detection

Author: Ni Lao, Jun Zhu, Liu Xinwang, Yandong Liu, William W. Cohen

Abstract: Markov networks (MNs) can incorporate arbitrarily complex features in modeling relational data. However, this flexibility comes at a sharp price of training an exponentially complex model. To address this challenge, we propose a novel relational learning approach, which consists of a restricted class of relational MNs (RMNs) called relation tree-based RMN (treeRMN), and an efficient Hidden Variable Detection algorithm called Contrastive Variable Induction (CVI). On one hand, the restricted treeRMN only considers simple (e.g., unary and pairwise) features in relational data and thus achieves computational efficiency; and on the other hand, the CVI algorithm efficiently detects hidden variables which can capture long range dependencies. Therefore, the resultant approach is highly efficient yet does not sacrifice its expressive power. Empirical results on four real datasets show that the proposed relational learning method can achieve similar prediction quality as the state-of-the-art approaches, but is significantly more efficient in training; and the induced hidden variables are semantically meaningful and crucial to improve the training speed and prediction qualities of treeRMNs.

3 0.16731851 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

Author: Morten Mørup, Kristoffer Madsen, Anne-marie Dogonowski, Hartwig Siebner, Lars K. Hansen

Abstract: Functional magnetic resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements which form complex network at a whole brain level. Most analyses of functional resting state networks (RSN) have been based on the analysis of correlation between the temporal dynamics of various regions of the brain. While these models can identify coherently behaving groups in terms of correlation they give little insight into how these groups interact. In this paper we take a different view on the analysis of functional resting state networks. Starting from the definition of resting state as functional coherent groups we search for functional units of the brain that communicate with other parts of the brain in a coherent manner as measured by mutual information. We use the infinite relational model (IRM) to quantify functional coherent groups of resting state networks and demonstrate how the extracted component interactions can be used to discriminate between functional resting state activity in multiple sclerosis and normal subjects. 1

4 0.13012308 83 nips-2010-Evidence-Specific Structures for Rich Tractable CRFs

Author: Anton Chechetka, Carlos Guestrin

Abstract: We present a simple and effective approach to learning tractable conditional random fields with structure that depends on the evidence. Our approach retains the advantages of tractable discriminative models, namely efficient exact inference and arbitrarily accurate parameter learning in polynomial time. At the same time, our algorithm does not suffer a large expressive power penalty inherent to fixed tractable structures. On real-life relational datasets, our approach matches or exceeds state of the art accuracy of the dense models, and at the same time provides an order of magnitude speedup. 1

5 0.11033624 155 nips-2010-Learning the context of a category

Author: Dan Navarro

Abstract: This paper outlines a hierarchical Bayesian model for human category learning that learns both the organization of objects into categories, and the context in which this knowledge should be applied. The model is fit to multiple data sets, and provides a parsimonious method for describing how humans learn context specific conceptual representations.

6 0.10604835 223 nips-2010-Rates of convergence for the cluster tree

7 0.10535743 261 nips-2010-Supervised Clustering

8 0.090973109 55 nips-2010-Cross Species Expression Analysis using a Dirichlet Process Mixture Model with Latent Matchings

9 0.084581144 230 nips-2010-Robust Clustering as Ensembles of Affinity Relations

10 0.071625933 276 nips-2010-Tree-Structured Stick Breaking for Hierarchical Data

11 0.066160716 273 nips-2010-Towards Property-Based Classification of Clustering Paradigms

12 0.064037994 51 nips-2010-Construction of Dependent Dirichlet Processes based on Poisson Processes

13 0.056058411 62 nips-2010-Discriminative Clustering by Regularized Information Maximization

14 0.055980809 171 nips-2010-Movement extraction by detecting dynamics switches and repetitions

15 0.055860374 242 nips-2010-Slice sampling covariance hyperparameters of latent Gaussian models

16 0.052851129 228 nips-2010-Reverse Multi-Label Learning

17 0.052647382 49 nips-2010-Computing Marginal Distributions over Continuous Markov Networks for Statistical Relational Learning

18 0.050932597 139 nips-2010-Latent Variable Models for Predicting File Dependencies in Large-Scale Software Development

19 0.041938275 222 nips-2010-Random Walk Approach to Regret Minimization

20 0.040670045 162 nips-2010-Link Discovery using Graph Feature Tracking


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.128), (1, 0.033), (2, -0.013), (3, 0.017), (4, -0.12), (5, -0.034), (6, 0.015), (7, -0.084), (8, 0.088), (9, 0.031), (10, -0.015), (11, -0.022), (12, 0.034), (13, -0.114), (14, 0.168), (15, -0.045), (16, 0.05), (17, 0.062), (18, 0.014), (19, -0.048), (20, 0.003), (21, 0.124), (22, 0.086), (23, 0.009), (24, -0.149), (25, 0.083), (26, -0.021), (27, -0.013), (28, -0.04), (29, 0.047), (30, -0.153), (31, -0.097), (32, -0.106), (33, -0.003), (34, -0.151), (35, 0.005), (36, 0.034), (37, 0.132), (38, -0.087), (39, 0.031), (40, -0.08), (41, 0.018), (42, 0.005), (43, -0.037), (44, 0.001), (45, 0.113), (46, 0.113), (47, 0.071), (48, 0.039), (49, 0.143)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95153195 67 nips-2010-Dynamic Infinite Relational Model for Time-varying Relational Data Analysis

Author: Katsuhiko Ishiguro, Tomoharu Iwata, Naonori Ueda, Joshua B. Tenenbaum

Abstract: We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions and split & merge, of relation clusters like communities in social networks. Our proposed model abstracts observed timevarying object-object relationships into relationships between object clusters. We extend the infinite Hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate a number of clusters simultaneously. We show the usefulness of the model through experiments with synthetic and real-world data sets.

2 0.78080475 71 nips-2010-Efficient Relational Learning with Hidden Variable Detection

Author: Ni Lao, Jun Zhu, Liu Xinwang, Yandong Liu, William W. Cohen

Abstract: Markov networks (MNs) can incorporate arbitrarily complex features in modeling relational data. However, this flexibility comes at a sharp price of training an exponentially complex model. To address this challenge, we propose a novel relational learning approach, which consists of a restricted class of relational MNs (RMNs) called relation tree-based RMN (treeRMN), and an efficient Hidden Variable Detection algorithm called Contrastive Variable Induction (CVI). On one hand, the restricted treeRMN only considers simple (e.g., unary and pairwise) features in relational data and thus achieves computational efficiency; and on the other hand, the CVI algorithm efficiently detects hidden variables which can capture long range dependencies. Therefore, the resultant approach is highly efficient yet does not sacrifice its expressive power. Empirical results on four real datasets show that the proposed relational learning method can achieve similar prediction quality as the state-of-the-art approaches, but is significantly more efficient in training; and the induced hidden variables are semantically meaningful and crucial to improve the training speed and prediction qualities of treeRMNs.

3 0.54588974 155 nips-2010-Learning the context of a category

Author: Dan Navarro

Abstract: This paper outlines a hierarchical Bayesian model for human category learning that learns both the organization of objects into categories, and the context in which this knowledge should be applied. The model is fit to multiple data sets, and provides a parsimonious method for describing how humans learn context specific conceptual representations.

4 0.53335911 83 nips-2010-Evidence-Specific Structures for Rich Tractable CRFs

Author: Anton Chechetka, Carlos Guestrin

Abstract: We present a simple and effective approach to learning tractable conditional random fields with structure that depends on the evidence. Our approach retains the advantages of tractable discriminative models, namely efficient exact inference and arbitrarily accurate parameter learning in polynomial time. At the same time, our algorithm does not suffer a large expressive power penalty inherent to fixed tractable structures. On real-life relational datasets, our approach matches or exceeds state of the art accuracy of the dense models, and at the same time provides an order of magnitude speedup. 1

5 0.45321673 55 nips-2010-Cross Species Expression Analysis using a Dirichlet Process Mixture Model with Latent Matchings

Author: Ziv Bar-joseph, Hai-son P. Le

Abstract: Recent studies compare gene expression data across species to identify core and species specific genes in biological systems. To perform such comparisons researchers need to match genes across species. This is a challenging task since the correct matches (orthologs) are not known for most genes. Previous work in this area used deterministic matchings or reduced multidimensional expression data to binary representation. Here we develop a new method that can utilize soft matches (given as priors) to infer both, unique and similar expression patterns across species and a matching for the genes in both species. Our method uses a Dirichlet process mixture model which includes a latent data matching variable. We present learning and inference algorithms based on variational methods for this model. Applying our method to immune response data we show that it can accurately identify common and unique response patterns by improving the matchings between human and mouse genes. 1

6 0.44262263 273 nips-2010-Towards Property-Based Classification of Clustering Paradigms

7 0.43855309 261 nips-2010-Supervised Clustering

8 0.42804319 62 nips-2010-Discriminative Clustering by Regularized Information Maximization

9 0.42598495 215 nips-2010-Probabilistic Deterministic Infinite Automata

10 0.42257676 230 nips-2010-Robust Clustering as Ensembles of Affinity Relations

11 0.39104006 159 nips-2010-Lifted Inference Seen from the Other Side : The Tractable Features

12 0.38454634 223 nips-2010-Rates of convergence for the cluster tree

13 0.37325591 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

14 0.36549878 120 nips-2010-Improvements to the Sequence Memoizer

15 0.36159986 121 nips-2010-Improving Human Judgments by Decontaminating Sequential Dependencies

16 0.34094131 139 nips-2010-Latent Variable Models for Predicting File Dependencies in Large-Scale Software Development

17 0.32461694 237 nips-2010-Shadow Dirichlet for Restricted Probability Modeling

18 0.30946636 51 nips-2010-Construction of Dependent Dirichlet Processes based on Poisson Processes

19 0.29415154 49 nips-2010-Computing Marginal Distributions over Continuous Markov Networks for Statistical Relational Learning

20 0.27396911 153 nips-2010-Learning invariant features using the Transformed Indian Buffet Process


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.045), (17, 0.01), (22, 0.015), (27, 0.107), (30, 0.045), (35, 0.019), (45, 0.149), (50, 0.042), (52, 0.013), (60, 0.065), (65, 0.323), (77, 0.037), (79, 0.012), (90, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.73876035 67 nips-2010-Dynamic Infinite Relational Model for Time-varying Relational Data Analysis

Author: Katsuhiko Ishiguro, Tomoharu Iwata, Naonori Ueda, Joshua B. Tenenbaum

Abstract: We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions and split & merge, of relation clusters like communities in social networks. Our proposed model abstracts observed timevarying object-object relationships into relationships between object clusters. We extend the infinite Hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate a number of clusters simultaneously. We show the usefulness of the model through experiments with synthetic and real-world data sets.

2 0.56957561 174 nips-2010-Multi-label Multiple Kernel Learning by Stochastic Approximation: Application to Visual Object Recognition

Author: Serhat Bucak, Rong Jin, Anil K. Jain

Abstract: Recent studies have shown that multiple kernel learning is very effective for object recognition, leading to the popularity of kernel learning in computer vision problems. In this work, we develop an efficient algorithm for multi-label multiple kernel learning (ML-MKL). We assume that all the classes under consideration share the same combination of kernel functions, and the objective is to find the optimal kernel combination that benefits all the classes. Although several algorithms have been developed for ML-MKL, their computational cost is linear in the number of classes, making them unscalable when the number of classes is large, a challenge frequently encountered in visual object recognition. We address this computational challenge by developing a framework for ML-MKL that combines the worst-case analysis with stochastic approximation. Our analysis √ shows that the complexity of our algorithm is O(m1/3 lnm), where m is the number of classes. Empirical studies with object recognition show that while achieving similar classification accuracy, the proposed method is significantly more efficient than the state-of-the-art algorithms for ML-MKL. 1

3 0.53165537 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to ”fill the gaps” in noise correlations matrices using an iterative application of the Wishart distribution over positive definitive matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternativeforced choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations. 1

4 0.52519947 194 nips-2010-Online Learning for Latent Dirichlet Allocation

Author: Matthew Hoffman, Francis R. Bach, David M. Blei

Abstract: We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good or better than those found with batch VB, and in a fraction of the time. 1

5 0.52177602 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

Author: Ryan Kelly, Matthew Smith, Robert Kass, Tai S. Lee

Abstract: Activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on global context of the stimulus, as well as the neural context. This suggests the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1 regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode “Utah” array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron’s response, in addition to the neuron’s receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions. 1

6 0.52163267 98 nips-2010-Functional form of motion priors in human motion perception

7 0.52150929 268 nips-2010-The Neural Costs of Optimal Control

8 0.51926839 17 nips-2010-A biologically plausible network for the computation of orientation dominance

9 0.51832229 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

10 0.51418233 6 nips-2010-A Discriminative Latent Model of Image Region and Object Tag Correspondence

11 0.51351517 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

12 0.51241124 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

13 0.51111865 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

14 0.51084465 55 nips-2010-Cross Species Expression Analysis using a Dirichlet Process Mixture Model with Latent Matchings

15 0.51022297 150 nips-2010-Learning concept graphs from text with stick-breaking priors

16 0.51021832 39 nips-2010-Bayesian Action-Graph Games

17 0.51011318 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

18 0.50816458 276 nips-2010-Tree-Structured Stick Breaking for Hierarchical Data

19 0.5078041 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

20 0.50746953 155 nips-2010-Learning the context of a category