nips nips2005 nips2005-89 knowledge-graph by maker-knowledge-mining

89 nips-2005-Group and Topic Discovery from Relations and Their Attributes


Source: pdf

Author: Xuerui Wang, Natasha Mohanty, Andrew McCallum

Abstract: We present a probabilistic generative model of entity relationships and their attributes that simultaneously discovers groups among the entities and topics among the corresponding textual attributes. Block-models of relationship data have been studied in social network analysis for some time. Here we simultaneously cluster in several modalities at once, incorporating the attributes (here, words) associated with certain relationships. Significantly, joint inference allows the discovery of topics to be guided by the emerging groups, and vice-versa. We present experimental results on two large data sets: sixteen years of bills put before the U.S. Senate, comprising their corresponding text and voting records, and thirteen years of similar data from the United Nations. We show that in comparison with traditional, separate latent-variable models for words, or Blockstructures for votes, the Group-Topic model’s joint inference discovers more cohesive groups and improved topics. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We present a probabilistic generative model of entity relationships and their attributes that simultaneously discovers groups among the entities and topics among the corresponding textual attributes. [sent-3, score-1.096]

2 Here we simultaneously cluster in several modalities at once, incorporating the attributes (here, words) associated with certain relationships. [sent-5, score-0.161]

3 Significantly, joint inference allows the discovery of topics to be guided by the emerging groups, and vice-versa. [sent-6, score-0.387]

4 The experiments use sixteen years of bills put before the U.S. Senate, comprising their corresponding text and voting records, and thirteen years of similar data from the United Nations. [sent-9, score-0.251]

5 We show that in comparison with traditional, separate latent-variable models for words, or Blockstructures for votes, the Group-Topic model’s joint inference discovers more cohesive groups and improved topics. [sent-10, score-0.368]

6 The field of social network analysis (SNA) has developed mathematical models that discover patterns in interactions among entities. [sent-11, score-0.157]

7 One of the objectives of SNA is to detect salient groups of entities. [sent-12, score-0.235]

8 Group discovery has many applications, such as understanding the social structure of organizations or native tribes, uncovering criminal organizations, and modeling large-scale social networks in Internet services such as Friendster. [sent-13, score-0.28]

9 Social scientists have conducted extensive research on group detection, especially in fields such as anthropology and political science. [sent-16, score-0.295]

10 Recently, statisticians and computer scientists have begun to develop models that specifically discover group memberships [5, 2, 7]. [sent-17, score-0.31]

11 One such model is the stochastic Blockstructures model [7], which discovers the latent groups or classes based on pair-wise relation data. [sent-18, score-0.419]

12 A particular relation either holds or does not hold between a pair of entities (people, countries, organizations, etc.). [sent-19, score-0.208]

13 This model is extended in [4] to support an arbitrary number of groups by using a Chinese Restaurant Process prior. [sent-21, score-0.227]

14 The aforementioned models discover latent groups by examining only whether one or more relations exist between a pair of entities. [sent-22, score-0.413]

15 The Group-Topic (GT) model presented in this paper, on the other hand, considers both the relations between entities and also the attributes of the relations (e.g., the text associated with the relations) when assigning group memberships. [sent-23, score-0.592] [sent-25, score-0.291]

17 The GT model can be viewed as an extension of the stochastic Blockstructures model [7] with the key addition that group membership is conditioned on a latent variable, which in turn is also associated with the attributes of the relation. [sent-26, score-0.526]

18 In our experiments, the attributes of relations are words, and the latent variable represents the topic responsible for generating those words. [sent-27, score-0.415]

19 Our model captures the (language) attributes associated with interactions, and uses distinctions based on these attributes to better assign group memberships. [sent-28, score-0.534]

20 Consider a legislative body and imagine its members forming coalitions (groups), and voting accordingly. [sent-29, score-0.361]

21 However, different coalitions arise depending on the topic of the resolution up for a vote. [sent-30, score-0.194]

22 In the GT model, the discovery of groups is guided by the emerging topics, and the forming of topics is shaped by emerging groups. [sent-31, score-0.58]

23 Resolutions that would have been assigned the same topic in a model using words alone may be assigned to different topics if they exhibit distinct voting patterns. [sent-32, score-0.828]

24 Topics may be merged if the entities vote very similarly on them. [sent-33, score-0.302]

25 Likewise, multiple different divisions of entities into groups are made possible by conditioning them on the topics. [sent-34, score-0.401]

26 However, the ART (Author-Recipient-Topic) model does not explicitly discover groups formed by entities. [sent-37, score-0.289]

27 When forming latent groups, the GT model simultaneously discovers salient topics relevant to relationships between entities—topics which the models that only examine words are unable to detect. [sent-38, score-0.625]

28 We demonstrate the capabilities of the GT model by applying it to two large sets of voting data: one from the US Senate and the other from the General Assembly of the UN. [sent-39, score-0.242]

29 The model clusters voting entities into coalitions and simultaneously discovers topics for word attributes describing the relations (bills or resolutions) between entities. [sent-40, score-1.282]

30 We find that the groups obtained from the GT model are significantly more cohesive than those from the baseline under a pairwise t-test. [sent-41, score-0.29]

31 The GT model also discovers new and more salient topics that help better predict entities’ behaviors. [sent-43, score-0.518]

32 The Group-Topic model is a directed graphical model that clusters entities with relations between them, as well as attributes of those relations. [sent-44, score-0.549]

33 In this paper, we focus on symmetric relations and have words as the attributes on relations. [sent-46, score-0.299]

34 Without considering the topics of events, or by treating all events in a corpus as reflecting a single topic, the simplified model becomes equivalent to the stochastic Blockstructures model [7]. [sent-48, score-0.43]
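
To make the generative story concrete, here is a minimal sketch of one plausible reading of the model described above; the variable names, the Beta prior on pairwise agreement, and the fixed sizes are our assumptions for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
T, G, V, S = 4, 4, 1000, 100  # topics, groups, vocabulary size, entities (assumed)

theta = rng.dirichlet(np.ones(T))              # distribution over event topics
phi = rng.dirichlet(0.1 * np.ones(V), size=T)  # per-topic word distributions
pi = rng.dirichlet(np.ones(G), size=T)         # topic-conditioned group distributions
gamma = rng.beta(1.0, 1.0, size=(G, G))        # P(two entities agree | their groups)
gamma = (gamma + gamma.T) / 2                  # symmetric relations

def generate_event(n_words=50):
    """Sample one event (e.g., a bill): a topic, its words, and pairwise agreements."""
    t = rng.choice(T, p=theta)                     # topic of the event
    words = rng.choice(V, size=n_words, p=phi[t])  # word attributes of the relation
    g = rng.choice(G, size=S, p=pi[t])             # group assignments depend on topic
    # v[i, j] = 1 if entities i and j behave the same way on this event
    v = (rng.random((S, S)) < gamma[g[:, None], g[None, :]]).astype(int)
    return t, words, g, v
```

Setting T = 1 in this sketch removes the topic dependence, matching the reduction to Blockstructures noted above.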

35 We want to perform joint inference on (text) attributes and relations to obtain topic-wise group memberships. [sent-56, score-0.486]

36 In our case we need to compute the conditional distributions P(g_{st} | w, v, g_{-st}, t, α, β, η) and P(t_b | w, v, g, t_{-b}, α, β, η), where g_{-st} denotes the group assignments for all entities except entity s in topic t, and t_{-b} represents the topic assignments for all events except event b. [sent-60, score-0.913]

37 The topic conditional is

$$P(t_b \mid v, g, w, t_{-b}, \alpha, \beta, \eta) \propto \frac{\prod_{v=1}^{V}\prod_{x=1}^{e_v^{(b)}}\left(\eta_v + c_{t_b v} - x\right)}{\prod_{x=1}^{\sum_{v=1}^{V} e_v^{(b)}}\left(\sum_{v=1}^{V}\left(\eta_v + c_{t_b v}\right) - x\right)} \prod_{g=1}^{G}\prod_{h=g}^{G} \frac{\prod_{k=1}^{2}\Gamma\left(\beta_k + m_{ghk}^{(b)}\right)}{\Gamma\left(\sum_{k=1}^{2}\left(\beta_k + m_{ghk}^{(b)}\right)\right)},$$

where $e_v^{(b)}$ is the number of tokens of word $v$ in event $b$. [sent-64, score-0.306]
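
As a rough illustration of how a collapsed Gibbs update of this form can be computed, here is a sketch of resampling one event's topic with log-gamma functions; the count arrays and their bookkeeping conventions are our assumptions about a straightforward implementation, not the authors' code.

```python
import numpy as np
from scipy.special import gammaln

def resample_event_topic(e_b, c, m, eta, beta, rng):
    """Sample t_b for one event b.
    e_b: (V,) word counts of event b, already removed from c
    c:   (T, V) topic-word counts over the remaining events
    m:   (T, P, 2) agree/disagree counts for the P group pairs (g <= h),
         precomputed per candidate topic with event b's votes included
    """
    T = c.shape[0]
    n_b = e_b.sum()
    log_p = np.zeros(T)
    for t in range(T):
        a = eta + c[t]
        # Word term: the products of factors above, in Gamma-function form
        word = (gammaln(a + e_b).sum() - gammaln(a).sum()
                - gammaln(a.sum() + n_b) + gammaln(a.sum()))
        # Vote term: product over group pairs of ratios of Gamma functions
        b_m = beta + m[t]
        vote = gammaln(b_m).sum() - gammaln(b_m.sum(axis=1)).sum()
        log_p[t] = word + vote
    p = np.exp(log_p - log_p.max())  # normalize in log space for stability
    return rng.choice(T, p=p / p.sum())
```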

38 There has been a surge of interest in models that describe relational data, or relations between entities viewed as links in a network, including recent work in group discovery [2, 5]. [sent-67, score-0.625]

39 The group cohesion in GT is significantly better than in the baseline. [sent-77, score-0.311]

40 The GT model improves on [4] in that it takes advantage of information from different modalities by conditioning group membership on topics. [sent-79, score-0.321]

41 As an extension of the ART model, RART clusters together entities with similar roles. [sent-81, score-0.243]

42 In contrast, the GT model presented here clusters entities into groups based on their relations to other entities. [sent-82, score-0.582]

43 There has been a considerable amount of previous work in understanding voting patterns. [sent-83, score-0.208]

44 The authors of [3] apply their model to voting data from the 108th US Senate, where the behavior of an entity is its vote on a resolution. [sent-85, score-0.429]

45 However, unlike [3], our goal is to cluster entities by the similarity of their voting patterns, so we are only interested in whether a pair of entities voted the same way or differently, not in their actual yes/no votes. [sent-87, score-0.671]
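
A minimal sketch of this preprocessing step (our illustration; the encoding of missing votes is an assumption):

```python
import numpy as np

def pairwise_agreement(votes):
    """Map one roll call's raw votes to the pairwise data the model consumes.
    votes: length-S array with +1 for Yes, -1 for No, 0 for absent/Not Voting.
    Returns v with v[i, j] = 1 if i and j voted the same way, 0 if differently,
    and -1 where either entity's vote is missing."""
    votes = np.asarray(votes)
    v = (votes[:, None] == votes[None, :]).astype(int)
    missing = (votes[:, None] == 0) | (votes[None, :] == 0)
    v[missing] = -1
    return v

v = pairwise_agreement([1, 1, -1, 0, -1])  # entities 0 and 1 agree: v[0, 1] == 1
```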

46 We present experiments applying the GT model to the voting records of members of two legislative bodies: the US Senate and the UN General Assembly. [sent-89, score-0.332]

47 For comparison, we present the results of a baseline method that first uses a mixture of unigrams to discover topics and associate a topic with each resolution, and then runs the Blockstructures model [7] separately on the resolutions assigned to each topic. [sent-90, score-0.776]
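
For concreteness, here is a sketch of such a two-stage baseline: an EM-fitted mixture of unigrams assigns each resolution a topic, and a group model is then fit separately per topic. The smoothing constants and the fit_blockstructures stub are hypothetical.

```python
import numpy as np

def mixture_of_unigrams(X, T, n_iter=50, seed=0):
    """X: (B, V) word-count matrix over B resolutions. Returns hard topic labels."""
    rng = np.random.default_rng(seed)
    log_phi = np.log(rng.dirichlet(np.ones(X.shape[1]), size=T))  # topic-word dists
    log_pi = np.log(np.full(T, 1.0 / T))                          # mixing weights
    for _ in range(n_iter):
        ll = X @ log_phi.T + log_pi                   # E-step: doc-topic log-likelihoods
        r = np.exp(ll - ll.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)             # responsibilities
        log_pi = np.log(r.mean(axis=0) + 1e-12)       # M-step: mixing weights
        counts = r.T @ X + 0.01                       # smoothed expected word counts
        log_phi = np.log(counts / counts.sum(axis=1, keepdims=True))
    return ll.argmax(axis=1)

# Stage 2 (schematic): fit Blockstructures on the resolutions within each topic.
# for t in range(T):
#     fit_blockstructures(votes[labels == t])  # hypothetical function
```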

48 This baseline approach is similar to the GT model in that it discovers both groups and topics, and has different group assignments on different topics. [sent-91, score-0.629]

49 We are interested in the quality of both the groups and the topics. [sent-93, score-0.193]

50 In the political science literature, group cohesion is quantified by the Agreement Index (AI) [3], which, based on the number of group members that vote Yes, No or Abstain, measures the similarity of votes cast by members of a group during a particular roll call. [sent-94, score-1.219]
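
For reference, a commonly used form of the Agreement Index is shown below; we assume the definition in [3] matches this standard political-science formulation, where Y_i, N_i, and A_i count the group's Yes, No, and Abstain votes on roll call i.

$$\mathrm{AI}_i = \frac{\max\{Y_i, N_i, A_i\} - \tfrac{1}{2}\left[(Y_i + N_i + A_i) - \max\{Y_i, N_i, A_i\}\right]}{Y_i + N_i + A_i}$$

Under this form, AI equals 1 when the group votes unanimously and 0 when its votes split evenly across the three options.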

51 The group cohesion achieved by the GT model is found to be significantly greater than the baseline group cohesion under a pairwise t-test, as shown in Table 1 for both datasets, which indicates that the GT model is better able to capture cohesive groups. [sent-96, score-0.795]

52 Our Senate dataset consists of the voting records of Senators in the 101st-109th US Senate (1989-2005), obtained from the Library of Congress THOMAS database. [sent-98, score-0.251]

53 During a roll call for a particular bill, a Senator may respond Yea or Nay to the question that has been put to a vote; otherwise the vote is recorded as Not Voting. [sent-99, score-0.138]

54 Since there are far fewer words than pairs of votes, the likelihood of the text is weighted so as to be comparable with the likelihood of the votes. [sent-103, score-0.432]

55 Table 2: Top words for topics generated with the mixture of unigrams model on the Senate dataset. [sent-104, score-0.991]

    Economic:        federal labor insurance aid tax business employee care
    Education:       education school aid children drug students elementary prevention
    Military Misc.:  government military foreign tax congress aid law policy
    Energy:          energy power water nuclear gas petrol research pollution

56 The topics are influenced by both the words and votes on the bills. [sent-107, score-0.532]

57 We cluster the data into 4 topics and 4 groups (cluster sizes are chosen somewhat arbitrarily) and compare the results of GT with the baseline. [sent-109, score-0.523]

58 The most likely words for each topic from the traditional mixture of unigrams model are shown in Table 2, whereas the topics obtained using GT are shown in Table 3. [sent-110, score-0.635]

59 The GT model collapses the topics Education and Energy together into Education and Domestic, since the voting patterns on those topics are quite similar. [sent-111, score-0.902]

60 The new topic Social Security + Medicare did not have strong enough word coherence to appear in the baseline model, but it has a very distinct voting pattern, and thus is clearly found by the GT model. [sent-112, score-0.435]

61 Thus, importantly, GT discovers topics that help predict people’s behavior and relations, not simply word co-occurrences. [sent-113, score-0.496]

62 Examining the group distribution across topics in the GT model, we find that on the topic Economic the Republicans form a single group, whereas the Democrats split into 3 groups, indicating that the Democrats have been somewhat divided on this topic. [sent-114, score-1.15]

63 The group membership of Senators on Education + Domestic issues is shown in Table 4. [sent-116, score-0.286]

64 We see that the first group of Republicans includes a Democratic Senator from Texas, a state that usually votes Republican. [sent-117, score-0.389]

65 One example is Sen. Jeffords, who left the Republican Party to become an Independent and has championed legislation to strengthen education and environmental protection. [sent-120, score-0.169]

66 Nearly all the Republican Senators in Group 4 (in Table 4) are advocates for education, and many of them have received awards for their efforts. [sent-121, score-0.169]

67 Table 5: Top words for topics generated from the mixture of unigrams model with the UN dataset. [sent-125, score-1.254]

    Everything Nuclear:       nuclear weapons use implementation countries
    Human Rights:             rights human palestine situation israel
    Security in Middle East:  occupied israel syria security calls

68 Only text information is utilized to form the topics, as opposed to Table 6 where our GT model takes advantage of both text and voting information. [sent-126, score-0.328]

69 Many of the Senators in Group 3 have also focused on education and other domestic issues such as energy; however, they often have a more liberal stance than those in Group 4, and come from states that are historically less conservative. [sent-132, score-0.281]

70 Sen. Kassebaum is known to be uncomfortable with many Republican views on domestic issues such as education, and has voted against voluntary prayer in school. [sent-136, score-0.159]

71 We also inspect the Senators that switch groups the most across topics in the GT model. [sent-138, score-0.523]

72 Sen. Shelby (D-AL) votes with the Republicans on Economic, with the Democrats on Education + Domestic, and with a small group of maverick Republicans on Foreign and Social Security + Medicare. [sent-141, score-0.389]

73 The second dataset involves the voting record of the UN General Assembly (footnote 1). [sent-146, score-0.251]

74 We focus on the resolutions discussed from 1990-2003, covering the votes of 192 countries on 931 resolutions. [sent-147, score-0.348]

75 If a country is present during the roll call, it may choose to vote Yes, No or Abstain. [sent-148, score-0.169]

1 http://home. ... htm

76 The columns of Table 6 are the topics, whose top words are Nuclear Nonproliferation (nuclear, states, united, weapons, nations), Nuclear Arms Race (nuclear, arms, prevention, race, space), and Human Rights (rights, human, palestine, occupied, israel); the rows are groups 1-5, whose member countries include Brazil, Columbia, Chile, Peru, Venezuela, UK, France, Spain, Monaco, East-Timor, India, Russia, Micronesia, Japan, Germany, Italy, USA, Israel, Palau, and Mexico. [sent-151, score-0.373]

79 Table 6: Top words for topics generated from the GT model with the UN dataset as well as the corresponding groups for each topic (column). [sent-196, score-0.792]

80 The countries listed for each group are ordered by their 2005 GDP (PPP). [sent-197, score-0.389]

81 Because we parameterize agreement and not the votes themselves, this three-value setting does not require any change to our model. [sent-200, score-0.141]

82 In experiments with this dataset, we use a weighting factor of 500 for text (raising the likelihood of the text to the power 500, so as to make it comparable with the likelihood of the pairs of votes for each resolution). [sent-201, score-0.227]
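
In log space this weighting is a single multiplication before the two modalities are combined; a minimal sketch (the constant and function name are ours):

```python
TEXT_WEIGHT = 500  # weighting factor reported for the UN experiments

def weighted_log_joint(log_lik_text, log_lik_votes):
    """Temper the text likelihood so it is comparable to the votes:
    p(w)^500 * p(v)  <=>  500 * log p(w) + log p(v)."""
    return TEXT_WEIGHT * log_lik_text + log_lik_votes
```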

83 We cluster this dataset into 3 topics and 5 groups (chosen somewhat arbitrarily). [sent-202, score-0.566]

84 The most probable words in each topic from the mixture of unigrams model are shown in Table 5. [sent-203, score-0.305]

85 For example, Everything Nuclear comprises all resolutions that have anything to do with the use of nuclear technology, including nuclear weapons. [sent-204, score-0.586]

86 These two issues had drastically different voting patterns in the UN, as can be seen in the contrasting group structure for those topics in Table 6. [sent-206, score-0.786]

87 Thus, again, the GT model is able to discover more salient topics—topics that reflect the voting patterns and coalitions, not simply word co-occurrence alone. [sent-207, score-0.4]

88 The countries in Table 6 are ranked by their GDP in 2005 (footnote 2). [sent-208, score-0.141]

89 As seen in Table 6, the groups formed in Nuclear Arms Race are unlike the groups formed in other topics. [sent-209, score-0.386]

90 These groups map well to the global political situation of that time when, despite the end of the Cold War, there was mutual distrust between Russia and the US with regard to the continued manufacture of nuclear weapons. [sent-210, score-0.5]

91 For missions to outer space and nuclear arms, India was a staunch ally of Russia, while Israel was an ally of the US. [sent-211, score-0.322]

92 5 Conclusions We introduce the Group-Topic model that jointly discovers latent groups in a network as well as clusters of attributes (or topics) of events that influence the interaction between entities. [sent-212, score-0.578]

93 The model extends prior work on latent group discovery by capturing not only pair-wise relations between entities but also multiple attributes of the relations (in particular, words describing the relations). [sent-213, score-1.004]

94 In this way the GT model obtains more cohesive groups as well as salient topics that influence the interaction between groups. [sent-214, score-0.662]

95 This paper demonstrates that the Group-Topic model is able to discover topics capturing the group-based interactions between members of a legislative body. [sent-215, score-0.764]

96 The model can be applied not just to voting data, but to any data having relations with attributes. [sent-216, score-0.354]

97 We are now using the model to analyze citations among academic papers, capturing the topics of research papers and discovering research groups. [sent-217, score-0.364]

98 The model can be altered suitably to consider other categorical, multi-dimensional, and continuous attributes characterizing relations. [sent-218, score-0.16]

99 Footnote 2: In Table 6, we omit some countries (represented by "...") in order to show other interesting but relatively low-ranked countries (for example, Russia) in the GDP list. [sent-254, score-0.141] [sent-257, score-0.141]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('topics', 0.33), ('gt', 0.312), ('nuclear', 0.26), ('group', 0.248), ('entities', 0.208), ('voting', 0.208), ('senate', 0.204), ('groups', 0.193), ('education', 0.169), ('countries', 0.141), ('votes', 0.141), ('topic', 0.131), ('security', 0.126), ('attributes', 0.126), ('domestic', 0.112), ('discovers', 0.112), ('relations', 0.112), ('blockstructures', 0.11), ('gst', 0.11), ('republicans', 0.11), ('senators', 0.11), ('social', 0.095), ('democrats', 0.094), ('vote', 0.094), ('entity', 0.093), ('tb', 0.087), ('republican', 0.082), ('arms', 0.081), ('tax', 0.079), ('unigrams', 0.079), ('race', 0.075), ('event', 0.07), ('russia', 0.068), ('resolutions', 0.066), ('coalitions', 0.063), ('cohesion', 0.063), ('cohesive', 0.063), ('india', 0.063), ('medicare', 0.063), ('rights', 0.063), ('discover', 0.062), ('un', 0.062), ('words', 0.061), ('discovery', 0.057), ('gdp', 0.055), ('congress', 0.055), ('foreign', 0.055), ('word', 0.054), ('table', 0.053), ('economic', 0.05), ('hk', 0.049), ('bills', 0.047), ('democrat', 0.047), ('ev', 0.047), ('insurance', 0.047), ('legislative', 0.047), ('mexico', 0.047), ('mghk', 0.047), ('political', 0.047), ('senator', 0.047), ('shelby', 0.047), ('voinovich', 0.047), ('voted', 0.047), ('weapons', 0.047), ('latent', 0.046), ('roll', 0.044), ('members', 0.043), ('text', 0.043), ('dataset', 0.043), ('salient', 0.042), ('baseline', 0.042), ('brazil', 0.041), ('tokens', 0.041), ('israel', 0.041), ('membership', 0.038), ('aid', 0.038), ('clusters', 0.035), ('modalities', 0.035), ('united', 0.035), ('model', 0.034), ('organizations', 0.033), ('assigned', 0.032), ('events', 0.032), ('ally', 0.031), ('armstrong', 0.031), ('azerbaijan', 0.031), ('belarus', 0.031), ('chafee', 0.031), ('country', 0.031), ('ctb', 0.031), ('danforth', 0.031), ('indonesia', 0.031), ('jakulin', 0.031), ('jeffords', 0.031), ('kassebaum', 0.031), ('mg', 0.031), ('nations', 0.031), ('nonproliferation', 0.031), ('ntg', 0.031), ('palestine', 0.031)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 89 nips-2005-Group and Topic Discovery from Relations and Their Attributes

Author: Xuerui Wang, Natasha Mohanty, Andrew McCallum

Abstract: We present a probabilistic generative model of entity relationships and their attributes that simultaneously discovers groups among the entities and topics among the corresponding textual attributes. Block-models of relationship data have been studied in social network analysis for some time. Here we simultaneously cluster in several modalities at once, incorporating the attributes (here, words) associated with certain relationships. Significantly, joint inference allows the discovery of topics to be guided by the emerging groups, and vice-versa. We present experimental results on two large data sets: sixteen years of bills put before the U.S. Senate, comprising their corresponding text and voting records, and thirteen years of similar data from the United Nations. We show that in comparison with traditional, separate latent-variable models for words, or Blockstructures for votes, the Group-Topic model’s joint inference discovers more cohesive groups and improved topics. 1

2 0.18709324 60 nips-2005-Dynamic Social Network Analysis using Latent Space Models

Author: Purnamrita Sarkar, Andrew W. Moore

Abstract: This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (subquadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low dimensional kd-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for non-linear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities, which indicate linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-publication data. We present a detailed version of this work in [1]. 1

3 0.15955485 52 nips-2005-Correlated Topic Models

Author: John D. Lafferty, David M. Blei

Abstract: Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than x-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data sets. 1

4 0.12604928 185 nips-2005-Subsequence Kernels for Relation Extraction

Author: Raymond J. Mooney, Razvan C. Bunescu

Abstract: We present a new kernel method for extracting semantic relations between entities in natural language text, based on a generalization of subsequence kernels. This kernel uses three types of subsequence patterns that are typically employed in natural language to assert relationships between two entities. Experiments on extracting protein interactions from biomedical corpora and top-level relations from newspaper corpora demonstrate the advantages of this approach. 1

5 0.082108408 48 nips-2005-Context as Filtering

Author: Daichi Mochihashi, Yuji Matsumoto

Abstract: Long-distance language modeling is important not only in speech recognition and machine translation, but also in high-dimensional discrete sequence modeling in general. However, the problem of context length has almost been neglected so far and a naïve bag-of-words history has been employed in natural language processing. In contrast, in this paper we view topic shifts within a text as a latent stochastic process to give an explicit probabilistic generative model that has partial exchangeability. We propose an online inference algorithm using particle filters to recognize topic shifts to employ the most appropriate length of context automatically. Experiments on the BNC corpus showed consistent improvement over previous methods involving no chronological order. 1

6 0.070352986 111 nips-2005-Learning Influence among Interacting Markov Chains

7 0.068279445 72 nips-2005-Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation

8 0.065028258 188 nips-2005-Temporally changing synaptic plasticity

9 0.064719275 100 nips-2005-Interpolating between types and tokens by estimating power-law generators

10 0.061875593 130 nips-2005-Modeling Neuronal Interactivity using Dynamic Bayesian Networks

11 0.059167162 55 nips-2005-Describing Visual Scenes using Transformed Dirichlet Processes

12 0.056983627 87 nips-2005-Goal-Based Imitation as Probabilistic Inference over Graphical Models

13 0.049996119 96 nips-2005-Inference with Minimal Communication: a Decision-Theoretic Variational Approach

14 0.040209487 9 nips-2005-A Domain Decomposition Method for Fast Manifold Learning

15 0.040147707 113 nips-2005-Learning Multiple Related Tasks using Latent Independent Component Analysis

16 0.038932558 54 nips-2005-Data-Driven Online to Batch Conversions

17 0.035302829 115 nips-2005-Learning Shared Latent Structure for Image Synthesis and Robotic Imitation

18 0.035207663 178 nips-2005-Soft Clustering on Graphs

19 0.034603667 21 nips-2005-An Alternative Infinite Mixture Of Gaussian Process Experts

20 0.034012295 56 nips-2005-Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck Operators


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.119), (1, -0.002), (2, 0.043), (3, 0.047), (4, -0.014), (5, -0.167), (6, 0.056), (7, 0.143), (8, -0.087), (9, 0.035), (10, -0.135), (11, 0.004), (12, -0.088), (13, -0.236), (14, -0.129), (15, 0.016), (16, -0.08), (17, 0.078), (18, -0.02), (19, -0.036), (20, -0.047), (21, -0.094), (22, 0.054), (23, 0.006), (24, -0.071), (25, -0.044), (26, -0.049), (27, 0.004), (28, -0.062), (29, 0.195), (30, -0.04), (31, 0.198), (32, -0.005), (33, 0.047), (34, 0.038), (35, -0.111), (36, -0.073), (37, 0.164), (38, 0.02), (39, 0.174), (40, 0.084), (41, 0.014), (42, 0.054), (43, -0.03), (44, -0.103), (45, 0.067), (46, -0.017), (47, -0.087), (48, -0.097), (49, -0.14)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97542191 89 nips-2005-Group and Topic Discovery from Relations and Their Attributes

Author: Xuerui Wang, Natasha Mohanty, Andrew McCallum

Abstract: We present a probabilistic generative model of entity relationships and their attributes that simultaneously discovers groups among the entities and topics among the corresponding textual attributes. Block-models of relationship data have been studied in social network analysis for some time. Here we simultaneously cluster in several modalities at once, incorporating the attributes (here, words) associated with certain relationships. Significantly, joint inference allows the discovery of topics to be guided by the emerging groups, and vice-versa. We present experimental results on two large data sets: sixteen years of bills put before the U.S. Senate, comprising their corresponding text and voting records, and thirteen years of similar data from the United Nations. We show that in comparison with traditional, separate latent-variable models for words, or Blockstructures for votes, the Group-Topic model’s joint inference discovers more cohesive groups and improved topics. 1

2 0.58702052 185 nips-2005-Subsequence Kernels for Relation Extraction

Author: Raymond J. Mooney, Razvan C. Bunescu

Abstract: We present a new kernel method for extracting semantic relations between entities in natural language text, based on a generalization of subsequence kernels. This kernel uses three types of subsequence patterns that are typically employed in natural language to assert relationships between two entities. Experiments on extracting protein interactions from biomedical corpora and top-level relations from newspaper corpora demonstrate the advantages of this approach. 1

3 0.58037561 60 nips-2005-Dynamic Social Network Analysis using Latent Space Models

Author: Purnamrita Sarkar, Andrew W. Moore

Abstract: This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (subquadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low dimensional kd-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for non-linear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities, which indicate linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-publication data. We present a detailed version of this work in [1]. 1

4 0.50723076 52 nips-2005-Correlated Topic Models

Author: John D. Lafferty, David M. Blei

Abstract: Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than x-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data sets. 1

5 0.37184164 48 nips-2005-Context as Filtering

Author: Daichi Mochihashi, Yuji Matsumoto

Abstract: Long-distance language modeling is important not only in speech recognition and machine translation, but also in high-dimensional discrete sequence modeling in general. However, the problem of context length has almost been neglected so far and a naïve bag-of-words history has been employed in natural language processing. In contrast, in this paper we view topic shifts within a text as a latent stochastic process to give an explicit probabilistic generative model that has partial exchangeability. We propose an online inference algorithm using particle filters to recognize topic shifts to employ the most appropriate length of context automatically. Experiments on the BNC corpus showed consistent improvement over previous methods involving no chronological order. 1

6 0.36797044 171 nips-2005-Searching for Character Models

7 0.32268316 100 nips-2005-Interpolating between types and tokens by estimating power-law generators

8 0.32201111 111 nips-2005-Learning Influence among Interacting Markov Chains

9 0.27769935 107 nips-2005-Large scale networks fingerprinting and visualization using the k-core decomposition

10 0.22669114 72 nips-2005-Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation

11 0.22323574 188 nips-2005-Temporally changing synaptic plasticity

12 0.21507317 55 nips-2005-Describing Visual Scenes using Transformed Dirichlet Processes

13 0.20181865 9 nips-2005-A Domain Decomposition Method for Fast Manifold Learning

14 0.18749847 33 nips-2005-Bayesian Sets

15 0.18383792 130 nips-2005-Modeling Neuronal Interactivity using Dynamic Bayesian Networks

16 0.16558182 68 nips-2005-Factorial Switching Kalman Filters for Condition Monitoring in Neonatal Intensive Care

17 0.16409227 62 nips-2005-Efficient Estimation of OOMs

18 0.16069624 142 nips-2005-Oblivious Equilibrium: A Mean Field Approximation for Large-Scale Dynamic Games

19 0.1545056 112 nips-2005-Learning Minimum Volume Sets

20 0.15392813 125 nips-2005-Message passing for task redistribution on sparse graphs


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.022), (10, 0.037), (11, 0.015), (27, 0.066), (31, 0.037), (34, 0.039), (39, 0.01), (41, 0.013), (44, 0.021), (50, 0.013), (55, 0.019), (69, 0.049), (73, 0.031), (80, 0.479), (88, 0.042), (91, 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.87951767 89 nips-2005-Group and Topic Discovery from Relations and Their Attributes

Author: Xuerui Wang, Natasha Mohanty, Andrew McCallum

Abstract: We present a probabilistic generative model of entity relationships and their attributes that simultaneously discovers groups among the entities and topics among the corresponding textual attributes. Block-models of relationship data have been studied in social network analysis for some time. Here we simultaneously cluster in several modalities at once, incorporating the attributes (here, words) associated with certain relationships. Significantly, joint inference allows the discovery of topics to be guided by the emerging groups, and vice-versa. We present experimental results on two large data sets: sixteen years of bills put before the U.S. Senate, comprising their corresponding text and voting records, and thirteen years of similar data from the United Nations. We show that in comparison with traditional, separate latent-variable models for words, or Blockstructures for votes, the Group-Topic model’s joint inference discovers more cohesive groups and improved topics. 1

2 0.57648897 52 nips-2005-Correlated Topic Models

Author: John D. Lafferty, David M. Blei

Abstract: Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than x-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data sets. 1

3 0.54420918 35 nips-2005-Bayesian model learning in human visual perception

Author: Gergő Orbán, Jozsef Fiser, Richard N. Aslin, Máté Lengyel

Abstract: Humans make optimal perceptual decisions in noisy and ambiguous conditions. Computations underlying such optimal behavior have been shown to rely on probabilistic inference according to generative models whose structure is usually taken to be known a priori. We argue that Bayesian model selection is ideal for inferring similar and even more complex model structures from experience. We find in experiments that humans learn subtle statistical properties of visual scenes in a completely unsupervised manner. We show that these findings are well captured by Bayesian model learning within a class of models that seek to explain observed variables by independent hidden causes. 1

4 0.2539607 48 nips-2005-Context as Filtering

Author: Daichi Mochihashi, Yuji Matsumoto

Abstract: Long-distance language modeling is important not only in speech recognition and machine translation, but also in high-dimensional discrete sequence modeling in general. However, the problem of context length has almost been neglected so far and a na¨ve bag-of-words history has been ı employed in natural language processing. In contrast, in this paper we view topic shifts within a text as a latent stochastic process to give an explicit probabilistic generative model that has partial exchangeability. We propose an online inference algorithm using particle filters to recognize topic shifts to employ the most appropriate length of context automatically. Experiments on the BNC corpus showed consistent improvement over previous methods involving no chronological order. 1

5 0.22130875 72 nips-2005-Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation

Author: Jin Yu, Douglas Aberdeen, Nicol N. Schraudolph

Abstract: Reinforcement learning by direct policy gradient estimation is attractive in theory but in practice leads to notoriously ill-behaved optimization problems. We improve its robustness and speed of convergence with stochastic meta-descent, a gain vector adaptation method that employs fast Hessian-vector products. In our experiments the resulting algorithms outperform previously employed online stochastic, offline conjugate, and natural policy gradient methods. 1

6 0.21996894 155 nips-2005-Predicting EMG Data from M1 Neurons with Variational Bayesian Least Squares

7 0.21995005 87 nips-2005-Goal-Based Imitation as Probabilistic Inference over Graphical Models

8 0.21963002 112 nips-2005-Learning Minimum Volume Sets

9 0.21580362 36 nips-2005-Bayesian models of human action understanding

10 0.21499339 153 nips-2005-Policy-Gradient Methods for Planning

11 0.21349384 124 nips-2005-Measuring Shared Information and Coordinated Activity in Neuronal Networks

12 0.21196043 185 nips-2005-Subsequence Kernels for Relation Extraction

13 0.21157603 101 nips-2005-Is Early Vision Optimized for Extracting Higher-order Dependencies?

14 0.21010616 109 nips-2005-Learning Cue-Invariant Visual Responses

15 0.20856242 100 nips-2005-Interpolating between types and tokens by estimating power-law generators

16 0.20738323 119 nips-2005-Learning to Control an Octopus Arm with Gaussian Process Temporal Difference Methods

17 0.20713164 170 nips-2005-Scaling Laws in Natural Scenes and the Inference of 3D Shape

18 0.20711911 144 nips-2005-Off-policy Learning with Options and Recognizers

19 0.20708491 45 nips-2005-Conditional Visual Tracking in Kernel Space

20 0.20523374 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery