nips nips2012 nips2012-345 knowledge-graph by maker-knowledge-mining

345 nips-2012-Topic-Partitioned Multinetwork Embeddings


Source: pdf

Author: Peter Krafft, Juston Moore, Bruce Desmarais, Hanna M. Wallach

Abstract: We introduce a new Bayesian admixture model intended for exploratory analysis of communication networks—specifically, the discovery and visualization of topic-specific subnetworks in email data sets. Our model produces principled visualizations of email networks, i.e., visualizations that have precise mathematical interpretations in terms of our model and its relationship to the observed data. We validate our modeling assumptions by demonstrating that our model achieves better link prediction performance than three state-of-the-art network models and exhibits topic coherence comparable to that of latent Dirichlet allocation. We showcase our model’s ability to discover and visualize topic-specific communication patterns using a new email data set: the New Hanover County email network. We provide an extensive analysis of these communication patterns, leading us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. Finally, we advocate for principled visualization as a primary objective in the development of new network models. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We introduce a new Bayesian admixture model intended for exploratory analysis of communication networks—specifically, the discovery and visualization of topic-specific subnetworks in email data sets. [sent-6, score-1.259]

2 Our model produces principled visualizations of email networks. [sent-7, score-0.839]

3 That is, it produces visualizations that have precise mathematical interpretations in terms of our model and its relationship to the observed data. [sent-9, score-0.273]

4 We validate our modeling assumptions by demonstrating that our model achieves better link prediction performance than three state-of-the-art network models and exhibits topic coherence comparable to that of latent Dirichlet allocation. [sent-10, score-0.253]

5 We showcase our model’s ability to discover and visualize topic-specific communication patterns using a new email data set: the New Hanover County email network. [sent-11, score-1.719]

6 We provide an extensive analysis of these communication patterns, leading us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. [sent-12, score-1.291]

7 Finally, we advocate for principled visualization as a primary objective in the development of new network models. [sent-13, score-0.282]

8 1 Introduction: The structures of organizational communication networks are critical to collaborative problem solving [1]. [sent-14, score-0.368]

9 Although it is seldom possible for researchers to directly observe complete organizational communication networks, email data sets provide one means by which they can at least partially observe and reason about them. [sent-15, score-1.007]

10 As a result—and especially in light of their rich textual detail, existing infrastructure, and widespread usage—email data sets hold the potential to answer many important scientific and practical questions within the organizational and social sciences. [sent-16, score-0.147]

11 While some questions may be answered by studying the structure of an email network as a whole, other, more nuanced, questions can only be answered at finer levels of granularity—specifically, by studying topic-specific subnetworks. [sent-17, score-0.994]

12 For example, breaks in communication (or duplicated communication) about particular topics may indicate a need for some form of organizational restructuring. [sent-18, score-0.45]

13 In order to facilitate the study of these kinds of questions, we present a new Bayesian admixture model intended for discovering and summarizing topic-specific communication subnetworks in email data sets. [sent-19, score-1.149]

14 There are a number of probabilistic models that incorporate both network and text data. [sent-20, score-0.146]

15 Although some of these models are specifically for email networks (e.g., the author–recipient–topic model [2]), most are intended for networks of documents, such as web pages and the links between them [3] or academic papers and their citations [4]. [sent-21, score-0.708] [sent-24, score-0.259]

17 In contrast, an email network is more naturally viewed as a network of actors exchanging documents. [sent-25, score-1.044]

18 That is, actors are associated with nodes while documents are associated with edges. [sent-27, score-0.254]

19 In other words, an email network defines a multinetwork in which there may be multiple edges (one per email) between any pair of actors. [sent-28, score-0.937]

20 Instead, we take a complementary approach and focus on exploratory analysis. [sent-31, score-0.075]

21 Specifically, our goal is to discover and visualize topic-specific subnetworks. [sent-32, score-0.088]

22 If network modeling and visualization are undertaken separately, the resultant visualizations may not directly reflect the model and its relationship to the observed data. [sent-34, score-0.382]

23 Rather, these visualizations provide a view of the model and the data seen through the lens of the visualization algorithm and its associated assumptions, so any conclusions drawn from such visualizations can be biased by artifacts of the visualization algorithm. [sent-35, score-0.461]

24 Producing principled visualizations, i.e., visualizations that have precise interpretations in terms of an associated network model and its relationship to the observed data, remains an open challenge in statistical network modeling [5]. [sent-38, score-0.559]

25 Addressing this open challenge was a primary objective in the development of our new model. [sent-39, score-0.045]

26 In order to discover and visualize topic-specific subnetworks, our model must associate each author–recipient edge in the observed email network with a topic, as shown in Figure 1. [sent-40, score-1.029]

27 Our model draws upon ideas from latent Dirichlet allocation (LDA) [6] to identify a set of corpus-wide topics of communication, as well as the subset of topics that best describe each observed email. [sent-41, score-0.239]

28 We model network structure using an approach similar to that of Hoff et al.'s latent space model (LSM) [7] so as to facilitate visualization. [sent-42, score-0.111] [sent-43, score-0.024]

30 Given an observed network, LSM associates each actor in the network with a point in K-dimensional Euclidean space. [sent-44, score-0.211]

31 If K = 2 or K = 3, these interaction probabilities, collectively known as a “communication pattern”, can be directly visualized in 2- or 3-dimensional space via the locations of the actor-specific points. [sent-46, score-0.044]

32 Our model extends this idea by associating a K-dimensional Euclidean space with each topic. [sent-47, score-0.034]

33 Observed author–recipient edges are explicitly associated with topics via the K-dimensional topic-specific communication patterns. [sent-48, score-0.42]

34 In the next section, we present the mathematical details of our new model and outline a corresponding inference algorithm. [sent-49, score-0.023]

35 We then introduce a new email data set: the New Hanover County (NHC) email network. [sent-50, score-1.326]

36 Although our model is intended for exploratory analysis, we test our modeling assumptions via three validation tasks. [sent-51, score-0.146]

37 First, we show that our model achieves better link prediction performance than three state-of-the-art network models. [sent-53, score-0.111]

38 We also demonstrate that our model is capable of inferring topics that are as coherent as those inferred using LDA. [sent-54, score-0.106]

39 Together, these experiments indicate that our model is an appropriate model of network structure and that modeling this structure does not compromise topic quality. [sent-55, score-0.253]

40 As a final validation experiment, we show that synthetic data generated using our model possesses similar network statistics to those of the NHC email network. [sent-56, score-0.774]

41 We then showcase our model’s ability to discover and visualize topic-specific communication patterns using the NHC network. [sent-58, score-0.393]

42 We give an extensive analysis of these communication patterns and demonstrate that they provide accessible visualizations of email-based collaboration while possessing precise, meaningful interpretations within the mathematical framework of our model. [sent-59, score-0.492]

43 These findings lead us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. [sent-60, score-1.053]

44 Finally, we advocate for principled visualization as a primary objective in the development of new network models. [sent-61, score-0.282]

45 2 Topic-Partitioned Multinetwork Embeddings: In this section, we present our new probabilistic generative model (and associated inference algorithm) for communication networks. [sent-62, score-0.313]

46 For concreteness, we frame our discussion of this model in terms of email data, although it is generally applicable to any similarly-structured communication data. [sent-63, score-0.901]

47 The generative process and graphical model are provided in the supplementary materials. [sent-64, score-0.032]

48 A single email, indexed by $d$, is represented by a set of tokens $w^{(d)} = \{w_n^{(d)}\}_{n=1}^{N^{(d)}}$ that comprise the text of that email; an integer $a^{(d)} \in \{1, \dots, A\}$ indicating the identity of that email’s author; and a set of binary variables $y^{(d)} = \{y_r^{(d)}\}_{r=1}^{A}$ indicating whether each of the $A$ actors in the network is a recipient of that email. [sent-65, score-0.119] [sent-68, score-0.36]

50 For simplicity, we assume that authors do not send emails to themselves (i.e., $y_{a^{(d)}}^{(d)} = 0$). [sent-69, score-0.104]

51 Given a real-world email data set $\mathcal{D} = \{w^{(d)}, a^{(d)}, y^{(d)}\}_{d=1}^{D}$, our model permits inference of the topics expressed in the text of the emails; a set of topic-specific $K$-dimensional embeddings (i.e., points in $K$-dimensional Euclidean space) of the $A$ actors in the network; and a partition of the full communication network into a set of topic-specific subnetworks. [sent-72, score-0.844] [sent-74, score-0.489]
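The data representation above can be made concrete with a minimal sketch; the `Email` class, field names, and toy values below are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Email:
    """One email d: token ids w^(d), author a^(d), binary recipient vector y^(d)."""
    w: List[int]   # tokens, length N^(d) (may be empty)
    a: int         # author id in {0, ..., A-1}
    y: List[int]   # y[r] = 1 iff actor r is a recipient; y[a] == 0

# A toy data set D with A = 3 actors and V = 4 word types.
D = [
    Email(w=[0, 2, 2], a=0, y=[0, 1, 0]),
    Email(w=[1, 3],    a=2, y=[1, 1, 0]),
    Email(w=[],        a=1, y=[0, 0, 1]),  # empty email: still carries an edge
]

# Every recipient flag contributes one edge, so D defines a multinetwork:
# multiple edges (one per email) may connect the same pair of actors.
num_edges = sum(sum(d.y) for d in D)
```

Note that the empty email is kept: as described below, emails with no text still convey information about communication frequencies.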

53 A symmetric Dirichlet prior with concentration parameter $\beta$ is placed over $\Phi = \{\phi^{(1)}, \dots, \phi^{(T)}\}$. [sent-76, score-0.022]

54 To capture the relationship between the topics expressed in an email and that email’s recipients, each topic $t$ is also associated with a “communication pattern”: an $A \times A$ matrix of probabilities $P^{(t)}$. [sent-80, score-0.957]

55 Given an email about topic $t$, authored by actor $a$, element $p_{ar}^{(t)}$ is the probability of actor $a$ including actor $r$ as a recipient of that email. [sent-81, score-1.176]

56 Inspired by LSM [7], each communication pattern $P^{(t)}$ is represented implicitly via a set of $A$ points in $K$-dimensional Euclidean space $S^{(t)} = \{s_a^{(t)}\}_{a=1}^{A}$ and a scalar bias term $b^{(t)}$, such that $p_{ar}^{(t)} = p_{ra}^{(t)} = \sigma(b^{(t)} - \|s_a^{(t)} - s_r^{(t)}\|)$, with $s_a^{(t)} \sim \mathcal{N}(0, \sigma_1^2 I)$ and $b^{(t)} \sim \mathcal{N}(\mu, \sigma_2^2)$. [sent-82, score-0.618]

57 If $K = 2$ or $K = 3$, this representation enables each topic-specific communication pattern to be visualized in 2- or 3-dimensional space via the locations of the points associated with the $A$ actors. [sent-83, score-0.347]

58 In isolation, each point $s_a^{(t)}$ conveys no information; however, the distance between any two points has a precise and meaningful interpretation in the generative process. [sent-85, score-0.233]

59 Specifically, the recipients of any email associated with topic t are more likely to be those actors near to the email’s author in the Euclidean space corresponding to that topic. [sent-86, score-1.138]
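The sigmoid-of-distance parameterization above can be sketched in a few lines; `edge_prob` is an assumed helper name, not the authors' code.

```python
import math

def edge_prob(s_a, s_r, b):
    """p_ar = sigma(b - ||s_a - s_r||): actors near the author in the
    topic's Euclidean space are more likely recipients."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(s_a, s_r)))
    return 1.0 / (1.0 + math.exp(-(b - dist)))

# Two actors at the same point: probability sigma(b).
p_same = edge_prob([0.0, 0.0], [0.0, 0.0], b=0.0)  # sigma(0) = 0.5
# Moving the actors apart monotonically lowers the probability.
p_far = edge_prob([0.0, 0.0], [3.0, 4.0], b=0.0)   # sigma(-5)
```

The bias $b^{(t)}$ shifts all probabilities for a topic up or down; only relative distances between the $A$ points are meaningful, which is what makes the 2-D plots interpretable.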

60 Each email, indexed by $d$, has a discrete distribution over topics $\theta^{(d)}$. [sent-87, score-0.106]

61 A symmetric Dirichlet prior with concentration parameter $\alpha$ is placed over $\Theta = \{\theta^{(1)}, \dots, \theta^{(D)}\}$. [sent-88, score-0.022]

62 Each token $w_n^{(d)}$ is associated with a topic assignment $z_n^{(d)}$, such that $z_n^{(d)} \sim \theta^{(d)}$ and $w_n^{(d)} \sim \phi^{(t)}$ for $z_n^{(d)} = t$. [sent-92, score-0.765]

63 Our model does not include a distribution over authors; the generative process is conditioned upon their identities. [sent-93, score-0.032]

64 The email-specific binary variables $y^{(d)} = \{y_r^{(d)}\}_{r=1}^{A}$ indicate the recipients of email $d$ and thus the presence (or absence) of email-specific edges from author $a^{(d)}$ to each of the $A - 1$ other actors. [sent-94, score-0.867]

65 Consequently, there may be multiple edges (one per email) between any pair of actors, and D defines a multinetwork over the entire set of actors. [sent-95, score-0.163]

66 We assume that the complete multinetwork comprises $T$ topic-specific subnetworks. [sent-96, score-0.13]

67 In other words, each $y_r^{(d)}$ is associated with some topic $t$, and therefore with topic-specific communication pattern $P^{(t)}$, such that $y_r^{(d)} \sim \mathrm{Bern}(p_{ar}^{(t)})$ for $a^{(d)} = a$. [sent-97, score-1.206]

68 A better approach, advocated by Blei and Jordan, is to draw a topic assignment for each $y_r^{(d)}$ from the empirical distribution over topics defined by $z^{(d)}$. [sent-100, score-0.67]

69 By definition, the set of topics associated with edges will therefore be a subset of the topics associated with tokens. [sent-101, score-0.331]

70 One way of simulating this generative process is to associate each $y_r^{(d)}$ with a position $n = 1, \dots, \max(1, N^{(d)})$, and therefore with the topic assignment $z_n^{(d)}$ at that position, by drawing a position assignment $x_r^{(d)} \sim \mathcal{U}(1, \dots, \max(1, N^{(d)}))$. [sent-102, score-0.477] [sent-105, score-0.496]

72 This indirect procedure ensures that $y_r^{(d)} \sim \mathrm{Bern}(p_{ar}^{(t)})$ for $a^{(d)} = a$, $x_r^{(d)} = n$, and $z_n^{(d)} = t$, as desired. [sent-109, score-0.658]

73 Emails with no text (i.e., $N^{(d)} = 0$) convey information about the frequencies of communication between their authors and recipients. [sent-113, score-0.276]

74 As a result, we do not omit such emails from $\mathcal{D}$; instead, we augment each one with a single “dummy” topic assignment $z_1^{(d)}$ for which there is no associated token $w_1^{(d)}$. [sent-114, score-0.336]
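Under stated assumptions (parameters $\theta^{(d)}$, $\Phi$, $S$, and $b$ are taken as given; all function and variable names are mine), the per-email generative process described above, including the dummy assignment for empty emails, might be simulated as:

```python
import math
import random

def generate_email(theta_d, phi, S, b, author, N, rng):
    """Illustrative sketch of the per-email generative process (not the
    authors' code): draw topics and tokens, then recipients via the
    topic-specific communication patterns."""
    T = len(theta_d)
    A = len(S[0])
    # Topic assignment per position; empty emails get one "dummy" slot.
    z = [rng.choices(range(T), weights=theta_d)[0] for _ in range(max(1, N))]
    # Tokens are drawn only for the N real positions.
    w = [rng.choices(range(len(phi[t])), weights=phi[t])[0] for t in z[:N]]
    y = [0] * A
    for r in range(A):
        if r == author:
            continue  # authors do not send emails to themselves
        t = z[rng.randrange(len(z))]  # x_r ~ U over positions; inherit its topic
        dist = math.dist(S[t][author], S[t][r])
        p_ar = 1.0 / (1.0 + math.exp(-(b[t] - dist)))
        y[r] = 1 if rng.random() < p_ar else 0
    return w, z, y
```

Because each recipient flag inherits a topic already used by a token (or the dummy slot), the topics on edges are always a subset of the topics on tokens, matching the Blei–Jordan-style construction above.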

75 2.1 Inference: For real-world data $\mathcal{D} = \{w^{(d)}, a^{(d)}, y^{(d)}\}_{d=1}^{D}$, the tokens $\mathcal{W} = \{w^{(d)}\}_{d=1}^{D}$, authors $\mathcal{A} = \{a^{(d)}\}_{d=1}^{D}$, and recipients $\mathcal{Y} = \{y^{(d)}\}_{d=1}^{D}$ are observed, while $\Phi$, $\Theta$, $S = \{S^{(t)}\}_{t=1}^{T}$, $B = \{b^{(t)}\}_{t=1}^{T}$, $Z = \{z^{(d)}\}_{d=1}^{D}$, and $X = \{x^{(d)}\}_{d=1}^{D}$ are unobserved. [sent-116, score-0.175]

76 In this section, we outline a Metropolis-within-Gibbs sampling algorithm that operates by sequentially resampling the value of each latent variable (i.e., $s_a^{(t)}$, $b^{(t)}$, $z_n^{(d)}$, or $x_r^{(d)}$) from its conditional posterior. [sent-118, score-0.023] [sent-120, score-0.395]

78 Count $N^{(t)}$ is the total number of tokens in $\mathcal{W}$ assigned to topic $t$ by $Z$, of which $N^{(v|t)}$ are of type $v$ and $N^{(t|d)}$ belong to email $d$. [sent-122, score-0.848]

79 New values for the discrete random variable $x_r^{(d)}$ may be sampled directly using $P(x_r^{(d)} = n \mid \mathcal{A}, \mathcal{Y}, S, B, z_n^{(d)} = t, Z_{\setminus d,n}) \propto (p_{a^{(d)}r}^{(t)})^{y_r^{(d)}} (1 - p_{a^{(d)}r}^{(t)})^{1 - y_r^{(d)}}$. [sent-123, score-0.639]
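Because this conditional is a discrete distribution over the email's token positions, it can be sampled directly; a hedged sketch follows (helper name `resample_x` and argument layout are my assumptions).

```python
import math
import random

def resample_x(z_d, y_dr, author, r, S, b, rng):
    """Sample x_r^(d): weight each position n by the Bernoulli likelihood
    of the observed y_r^(d) under the topic z_n^(d) at that position."""
    weights = []
    for t in z_d:  # topic assignment at each candidate position
        dist = math.dist(S[t][author], S[t][r])
        p = 1.0 / (1.0 + math.exp(-(b[t] - dist)))
        weights.append(p if y_dr == 1 else 1.0 - p)
    return rng.choices(range(len(z_d)), weights=weights)[0]
```

Positions whose topics place the author and recipient close together get more weight when the edge is present ($y_r^{(d)} = 1$), and less when it is absent.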

80 New values for the continuous random variables $s_a^{(t)}$ and $b^{(t)}$ cannot be sampled directly from their conditional posteriors, but may instead be obtained using the Metropolis–Hastings algorithm. [sent-124, score-0.147]

81 With a non-informative prior over $s_a^{(t)}$ (i.e., $s_a^{(t)} \sim \mathcal{N}(0, \infty I)$), the conditional posterior over $s_a^{(t)}$ is likewise proportional to the Bernoulli likelihood of the edges involving actor $a$ under topic $t$. [sent-125, score-0.147]

82 Likewise, with an improper, non-informative prior over $b^{(t)}$ (i.e., $b^{(t)} \sim \mathcal{N}(0, \infty)$), the conditional posterior over $b^{(t)}$ is $P(b^{(t)} \mid \mathcal{A}, \mathcal{Y}, S^{(t)}, Z, X) \propto \prod_{a=1}^{A} \prod_{r < a} (p_{ar}^{(t)})^{N^{(1|a,r,t)} + N^{(1|r,a,t)}} (1 - p_{ar}^{(t)})^{N^{(0|a,r,t)} + N^{(0|r,a,t)}}$. [sent-129, score-0.021] [sent-131, score-0.106]
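A random-walk Metropolis step for $b^{(t)}$ under this flat prior might look as follows; the step size, the flattened pair-count layout, and the helper names are my assumptions, not the authors' implementation.

```python
import math
import random

def log_lik_b(b_t, dists, n1, n0):
    """Log conditional posterior for b^(t), up to a constant, over actor
    pairs (a, r): dists[i] = ||s_a - s_r||, n1[i]/n0[i] = counts of
    present/absent edges for that pair in topic t."""
    ll = 0.0
    for d, k1, k0 in zip(dists, n1, n0):
        p = 1.0 / (1.0 + math.exp(-(b_t - d)))
        ll += k1 * math.log(p) + k0 * math.log(1.0 - p)
    return ll

def mh_update_b(b_t, dists, n1, n0, rng, step=0.5):
    """One random-walk Metropolis step; returns the new value of b^(t)."""
    prop = b_t + rng.gauss(0.0, step)
    log_alpha = log_lik_b(prop, dists, n1, n0) - log_lik_b(b_t, dists, n1, n0)
    return prop if math.log(rng.random()) < log_alpha else b_t

# Toy counts: pairs at distance 1 with mostly-present edges pull b upward.
rng = random.Random(1)
b = 0.0
for _ in range(200):
    b = mh_update_b(b, dists=[1.0, 1.0], n1=[9, 8], n0=[1, 2], rng=rng)
```

Because the proposal is a symmetric Gaussian, the Hastings correction cancels and the acceptance ratio reduces to the likelihood ratio above; the same scheme applies to each $s_a^{(t)}$.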


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('email', 0.663), ('yr', 0.391), ('communication', 0.238), ('sa', 0.147), ('visualizations', 0.14), ('actors', 0.14), ('subnetworks', 0.138), ('multinetwork', 0.13), ('zn', 0.128), ('topic', 0.121), ('xr', 0.12), ('network', 0.111), ('recipient', 0.109), ('topics', 0.106), ('recipients', 0.092), ('emails', 0.085), ('organizational', 0.085), ('author', 0.079), ('lsm', 0.078), ('nhc', 0.078), ('exploratory', 0.075), ('actor', 0.073), ('wn', 0.065), ('par', 0.064), ('tokens', 0.064), ('visualization', 0.059), ('ar', 0.053), ('assignment', 0.052), ('county', 0.052), ('desmarais', 0.052), ('intended', 0.05), ('interpretations', 0.049), ('visualize', 0.049), ('pa', 0.047), ('answered', 0.046), ('networks', 0.045), ('visualized', 0.044), ('associated', 0.043), ('bern', 0.042), ('questions', 0.042), ('dirichlet', 0.042), ('embeddings', 0.04), ('showcase', 0.04), ('hanover', 0.04), ('discover', 0.039), ('wallach', 0.036), ('admixture', 0.036), ('euclidean', 0.036), ('principled', 0.036), ('token', 0.035), ('text', 0.035), ('associating', 0.034), ('precise', 0.033), ('edges', 0.033), ('generative', 0.032), ('recommend', 0.032), ('amherst', 0.032), ('associate', 0.031), ('advocate', 0.031), ('documents', 0.028), ('patterns', 0.027), ('observed', 0.027), ('primary', 0.024), ('facilitate', 0.024), ('relationship', 0.024), ('outline', 0.023), ('massachusetts', 0.023), ('nuanced', 0.023), ('infrastructure', 0.023), ('position', 0.023), ('links', 0.022), ('producing', 0.022), ('pattern', 0.022), ('studying', 0.022), ('lda', 0.022), ('placed', 0.022), ('blei', 0.021), ('seldom', 0.021), ('noninformative', 0.021), ('citations', 0.021), ('duplicated', 0.021), ('hanna', 0.021), ('hoff', 0.021), ('conveys', 0.021), ('modeling', 0.021), ('development', 0.021), ('lens', 0.02), ('textual', 0.02), ('comprise', 0.02), ('bruce', 0.02), ('dummy', 0.02), ('count', 0.019), ('partitions', 0.019), ('indirect', 0.019), ('possessing', 0.019), ('convey', 0.019), ('collaboration', 0.019), ('csail', 0.019), ('exchanging', 0.019), ('authors', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 345 nips-2012-Topic-Partitioned Multinetwork Embeddings


2 0.091757804 124 nips-2012-Factorial LDA: Sparse Multi-Dimensional Text Models

Author: Michael Paul, Mark Dredze

Abstract: Latent variable models can be enriched with a multi-dimensional structure to consider the many latent factors in a text corpus, such as topic, author perspective and sentiment. We introduce factorial LDA, a multi-dimensional model in which a document is influenced by K different factors, and each word token depends on a K-dimensional vector of latent variables. Our model incorporates structured word priors and learns a sparse product of factors. Experiments on research abstracts show that our model can learn latent factors such as research topic, scientific discipline, and focus (methods vs. applications). Our modeling improvements reduce test perplexity and improve human interpretability of the discovered factors. 1

3 0.089007281 250 nips-2012-On-line Reinforcement Learning Using Incremental Kernel-Based Stochastic Factorization

Author: Doina Precup, Joelle Pineau, Andre S. Barreto

Abstract: Kernel-based stochastic factorization (KBSF) is an algorithm for solving reinforcement learning tasks with continuous state spaces which builds a Markov decision process (MDP) based on a set of sample transitions. What sets KBSF apart from other kernel-based approaches is the fact that the size of its MDP is independent of the number of transitions, which makes it possible to control the trade-off between the quality of the resulting approximation and the associated computational cost. However, KBSF’s memory usage grows linearly with the number of transitions, precluding its application in scenarios where a large amount of data must be processed. In this paper we show that it is possible to construct KBSF’s MDP in a fully incremental way, thus freeing the space complexity of this algorithm from its dependence on the number of sample transitions. The incremental version of KBSF is able to process an arbitrary amount of data, which results in a model-based reinforcement learning algorithm that can be used to solve continuous MDPs in both off-line and on-line regimes. We present theoretical results showing that KBSF can approximate the value function that would be computed by conventional kernel-based learning with arbitrary precision. We empirically demonstrate the effectiveness of the proposed algorithm in the challenging three-pole balancing task, in which the ability to process a large number of transitions is crucial for success. 1

4 0.075752199 354 nips-2012-Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes

Author: Michael Bryant, Erik B. Sudderth

Abstract: Variational methods provide a computationally scalable alternative to Monte Carlo methods for large-scale, Bayesian nonparametric learning. In practice, however, conventional batch and online variational methods quickly become trapped in local optima. In this paper, we consider a nonparametric topic model based on the hierarchical Dirichlet process (HDP), and develop a novel online variational inference algorithm based on split-merge topic updates. We derive a simpler and faster variational approximation of the HDP, and show that by intelligently splitting and merging components of the variational posterior, we can achieve substantially better predictions of test data than conventional online and batch variational algorithms. For streaming analysis of large datasets where batch analysis is infeasible, we show that our split-merge updates better capture the nonparametric properties of the underlying model, allowing continual learning of new topics.

5 0.074864723 219 nips-2012-Modelling Reciprocating Relationships with Hawkes Processes

Author: Charles Blundell, Jeff Beck, Katherine A. Heller

Abstract: We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. Social groups are often formed implicitly, through actions among members of groups. Yet many models of social networks use explicitly declared relationships to infer social structure. We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able to model reciprocity between groups of individuals. We then extend the Infinite Relational Model by using these reciprocating Hawkes processes to parameterise its edges, making events associated with edges co-dependent through time. Our model outperforms general, unstructured Hawkes processes as well as structured Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations. 1

6 0.069354124 77 nips-2012-Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models

7 0.067945167 274 nips-2012-Priors for Diversity in Generative Latent Variable Models

8 0.065050565 19 nips-2012-A Spectral Algorithm for Latent Dirichlet Allocation

9 0.064690292 102 nips-2012-Distributed Non-Stochastic Experts

10 0.062716998 166 nips-2012-Joint Modeling of a Matrix with Associated Text via Latent Binary Features

11 0.055042353 143 nips-2012-Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins

12 0.054343082 220 nips-2012-Monte Carlo Methods for Maximum Margin Supervised Topic Models

13 0.050894149 298 nips-2012-Scalable Inference of Overlapping Communities

14 0.050536133 332 nips-2012-Symmetric Correspondence Topic Models for Multilingual Text Analysis

15 0.048409481 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

16 0.048364189 12 nips-2012-A Neural Autoregressive Topic Model

17 0.046968918 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

18 0.040148623 59 nips-2012-Bayesian nonparametric models for bipartite graphs

19 0.03586074 253 nips-2012-On Triangular versus Edge Representations --- Towards Scalable Modeling of Networks

20 0.034713555 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.087), (1, 0.03), (2, -0.036), (3, -0.001), (4, -0.101), (5, -0.011), (6, -0.011), (7, -0.026), (8, 0.017), (9, -0.0), (10, 0.087), (11, 0.07), (12, 0.009), (13, -0.002), (14, 0.03), (15, 0.034), (16, 0.003), (17, -0.012), (18, -0.015), (19, 0.057), (20, 0.042), (21, 0.006), (22, 0.035), (23, -0.074), (24, 0.011), (25, -0.064), (26, 0.049), (27, 0.05), (28, 0.004), (29, -0.005), (30, 0.067), (31, -0.008), (32, 0.001), (33, 0.05), (34, 0.051), (35, -0.035), (36, -0.002), (37, 0.038), (38, -0.04), (39, 0.022), (40, -0.001), (41, 0.032), (42, -0.018), (43, -0.018), (44, -0.031), (45, 0.119), (46, 0.082), (47, -0.034), (48, -0.025), (49, -0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93064553 345 nips-2012-Topic-Partitioned Multinetwork Embeddings


2 0.74207002 332 nips-2012-Symmetric Correspondence Topic Models for Multilingual Text Analysis

Author: Kosuke Fukumasu, Koji Eguchi, Eric P. Xing

Abstract: Topic modeling is a widely used approach to analyzing large text collections. A small number of multilingual topic models have recently been explored to discover latent topics among parallel or comparable documents, such as in Wikipedia. Other topic models that were originally proposed for structured data are also applicable to multilingual documents. Correspondence Latent Dirichlet Allocation (CorrLDA) is one such model; however, it requires a pivot language to be specified in advance. We propose a new topic model, Symmetric Correspondence LDA (SymCorrLDA), that incorporates a hidden variable to control a pivot language, in an extension of CorrLDA. We experimented with two multilingual comparable datasets extracted from Wikipedia and demonstrate that SymCorrLDA is more effective than some other existing multilingual topic models. 1

3 0.6369738 12 nips-2012-A Neural Autoregressive Topic Model

Author: Hugo Larochelle, Stanislas Lauly

Abstract: We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm. 1

4 0.63503242 124 nips-2012-Factorial LDA: Sparse Multi-Dimensional Text Models

Author: Michael Paul, Mark Dredze

Abstract: Latent variable models can be enriched with a multi-dimensional structure to consider the many latent factors in a text corpus, such as topic, author perspective and sentiment. We introduce factorial LDA, a multi-dimensional model in which a document is influenced by K different factors, and each word token depends on a K-dimensional vector of latent variables. Our model incorporates structured word priors and learns a sparse product of factors. Experiments on research abstracts show that our model can learn latent factors such as research topic, scientific discipline, and focus (methods vs. applications). Our modeling improvements reduce test perplexity and improve human interpretability of the discovered factors. 1

5 0.63216716 154 nips-2012-How They Vote: Issue-Adjusted Models of Legislative Behavior

Author: Sean Gerrish, David M. Blei

Abstract: We develop a probabilistic model of legislative data that uses the text of the bills to uncover lawmakers’ positions on specific political issues. Our model can be used to explore how a lawmaker’s voting patterns deviate from what is expected and how that deviation depends on what is being voted on. We derive approximate posterior inference algorithms based on variational methods. Across 12 years of legislative data, we demonstrate both improvement in heldout predictive performance and the model’s utility in interpreting an inherently multi-dimensional space. 1

6 0.59446621 166 nips-2012-Joint Modeling of a Matrix with Associated Text via Latent Binary Features

7 0.57404208 253 nips-2012-On Triangular versus Edge Representations --- Towards Scalable Modeling of Networks

8 0.54456633 77 nips-2012-Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models

9 0.54058439 220 nips-2012-Monte Carlo Methods for Maximum Margin Supervised Topic Models

10 0.52192831 354 nips-2012-Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes

11 0.50652719 274 nips-2012-Priors for Diversity in Generative Latent Variable Models

12 0.50640464 298 nips-2012-Scalable Inference of Overlapping Communities

13 0.49732319 19 nips-2012-A Spectral Algorithm for Latent Dirichlet Allocation

14 0.42727387 47 nips-2012-Augment-and-Conquer Negative Binomial Processes

15 0.41854125 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

16 0.40942007 39 nips-2012-Analog readout for optical reservoir computers

17 0.40356219 52 nips-2012-Bayesian Nonparametric Modeling of Suicide Attempts

18 0.38708407 194 nips-2012-Learning to Discover Social Circles in Ego Networks

19 0.38088149 346 nips-2012-Topology Constraints in Graphical Models

20 0.35505572 132 nips-2012-Fiedler Random Fields: A Large-Scale Spectral Approach to Statistical Network Modeling


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.082), (17, 0.019), (21, 0.026), (38, 0.069), (39, 0.014), (41, 0.344), (42, 0.017), (54, 0.033), (55, 0.05), (74, 0.047), (76, 0.097), (80, 0.08), (92, 0.023)]
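The sparse (topicId, topicWeight) vector above is what the "similar papers computed by lda model" list below is built from. One plausible way such similarity scores are produced is cosine similarity between papers' topic-weight vectors; the second vector below is invented for illustration.

```python
import math

# Sketch of ranking papers by LDA topic-vector similarity: cosine similarity
# between sparse {topicId: weight} dictionaries. `this_paper` copies a few of
# the weights listed above; `other` is a made-up comparison vector.
def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv)

this_paper = {0: 0.082, 38: 0.069, 41: 0.344, 76: 0.097, 80: 0.08}
other      = {0: 0.05, 41: 0.30, 55: 0.12, 76: 0.08}
print(round(cosine(this_paper, other), 4))
```

Papers whose mass concentrates on the same topics (here topic 41) score high; papers with disjoint topic support score near zero.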

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.7025578 345 nips-2012-Topic-Partitioned Multinetwork Embeddings

Author: Peter Krafft, Juston Moore, Bruce Desmarais, Hanna M. Wallach

Abstract: We introduce a new Bayesian admixture model intended for exploratory analysis of communication networks—specifically, the discovery and visualization of topic-specific subnetworks in email data sets. Our model produces principled visualizations of email networks, i.e., visualizations that have precise mathematical interpretations in terms of our model and its relationship to the observed data. We validate our modeling assumptions by demonstrating that our model achieves better link prediction performance than three state-of-the-art network models and exhibits topic coherence comparable to that of latent Dirichlet allocation. We showcase our model’s ability to discover and visualize topic-specific communication patterns using a new email data set: the New Hanover County email network. We provide an extensive analysis of these communication patterns, leading us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. Finally, we advocate for principled visualization as a primary objective in the development of new network models.
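The topic-specific subnetworks this abstract describes can be sketched with a latent-space link probability of the kind such models use: each topic gets its own low-dimensional embedding of the actors, and the chance that actor i emails actor j about topic t decays with their distance in that topic's space. The positions and intercept below are made up; in the model they are latent variables inferred from data.

```python
import math

# Sketch of a topic-specific latent-space link probability (illustrative):
# P(i -> j | topic t) = sigmoid(b_t - ||s_i^t - s_j^t||), so actors who are
# close in topic t's embedding are likely to exchange email about topic t.
def link_prob(pos_i, pos_j, intercept):
    dist = math.dist(pos_i, pos_j)
    return 1.0 / (1.0 + math.exp(-(intercept - dist)))

# Two topics, three actors: actors 0 and 1 sit close together in topic 0's
# space but far apart in topic 1's space, so they communicate about topic 0.
embeddings = {
    0: [(0.0, 0.0), (0.3, 0.0), (3.0, 3.0)],
    1: [(0.0, 0.0), (4.0, 0.0), (0.5, 0.5)],
}
for t, pos in embeddings.items():
    print("topic", t, "P(0->1) =", round(link_prob(pos[0], pos[1], 1.0), 3))
```

This is also why the visualizations are "principled": plotting the per-topic embeddings directly displays the quantity that determines link probabilities, rather than an unrelated layout heuristic.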

2 0.68872595 72 nips-2012-Cocktail Party Processing via Structured Prediction

Author: Yuxuan Wang, Deliang Wang

Abstract: While human listeners excel at selectively attending to a conversation in a cocktail party, machine performance is still far inferior by comparison. We show that the cocktail party problem, or the speech separation problem, can be effectively approached via structured prediction. To account for temporal dynamics in speech, we employ conditional random fields (CRFs) to classify speech dominance within each time-frequency unit for a sound mixture. To capture the complex, nonlinear relationship between input and output, both state and transition feature functions in CRFs are learned by deep neural networks. The formulation of the problem as classification allows us to directly optimize a measure that is well correlated with human speech intelligibility. The proposed system substantially outperforms existing ones in a variety of noises.
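The classification target behind "speech dominance within each time-frequency unit" can be sketched as an ideal binary mask: mark each T-F unit where target-speech energy exceeds interference energy. The tiny energy grids below are invented; in the actual system the mask is what the CRF/DNN predicts from mixture features.

```python
# Sketch of the time-frequency classification target (illustrative):
# the ideal binary mask keeps each T-F unit where (toy) target energy
# exceeds noise energy and zeroes out the rest.
target = [[0.9, 0.1],   # toy per-unit target-speech energies
          [0.4, 0.7]]
noise  = [[0.2, 0.5],   # toy per-unit interference energies
          [0.5, 0.1]]

mask = [[1 if t > n else 0 for t, n in zip(t_row, n_row)]
        for t_row, n_row in zip(target, noise)]
print(mask)  # -> [[1, 0], [0, 1]]
```

Framing separation as predicting this mask is what lets the system optimize a classification measure correlated with intelligibility, instead of a waveform-reconstruction error.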

3 0.49454722 280 nips-2012-Proper losses for learning from partial labels

Author: Jesús Cid-Sueiro

Abstract: This paper discusses the problem of calibrating posterior class probabilities from partially labelled data. Each instance is assumed to be labelled as belonging to one of several candidate categories, at most one of them being true. We generalize the concept of proper loss to this scenario, we establish a necessary and sufficient condition for a loss function to be proper, and we show a direct procedure to construct a proper loss for partial labels from a conventional proper loss. The problem can be characterized by the mixing probability matrix relating the true class of the data and the observed labels. The full knowledge of this matrix is not required, and losses can be constructed that are proper for a wide set of mixing probability matrices.
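The role of the mixing probability matrix mentioned in this abstract can be sketched in a toy two-class case: if M gives P(observed label | true class) and is invertible, class posteriors are recoverable from observed-label posteriors. The matrix and posterior below are made up, and the hand-coded 2x2 inverse stands in for the paper's general loss construction.

```python
# Sketch of the mixing-matrix idea (illustrative): row l, column c of M is
# P(observed label l | true class c).  Knowing M lets us map between
# class posteriors and observed-label distributions in both directions.
M = [[0.8, 0.3],
     [0.2, 0.7]]

def observed_dist(p):
    """Distribution over observed labels implied by class posterior p: q = M p."""
    return [M[0][0] * p[0] + M[0][1] * p[1],
            M[1][0] * p[0] + M[1][1] * p[1]]

def recover_classes(q):
    """Invert the 2x2 mixing matrix to get the class posterior back: p = M^-1 q."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [( M[1][1] * q[0] - M[0][1] * q[1]) / det,
            (-M[1][0] * q[0] + M[0][0] * q[1]) / det]

p = [0.9, 0.1]
q = observed_dist(p)
print([round(x, 6) for x in recover_classes(q)])  # round-trips back to p
```

A loss that is proper for partial labels must, in effect, undo this mixing so that minimizing it still calibrates the true-class posterior; the paper shows this is possible without full knowledge of M.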

4 0.44290078 192 nips-2012-Learning the Dependency Structure of Latent Factors

Author: Yunlong He, Yanjun Qi, Koray Kavukcuoglu, Haesun Park

Abstract: In this paper, we study latent factor models with dependency structure in the latent space. We propose a general learning framework which induces sparsity on the undirected graphical model imposed on the vector of latent factors. A novel latent factor model SLFA is then proposed as a matrix factorization problem with a special regularization term that encourages collaborative reconstruction. The main benefit (novelty) of the model is that we can simultaneously learn the lower-dimensional representation for data and model the pairwise relationships between latent factors explicitly. An on-line learning algorithm is devised to make the model feasible for large-scale learning problems. Experimental results on two synthetic and two real-world data sets demonstrate that pairwise relationships and latent factors learned by our model provide a more structured way of exploring high-dimensional data, and the learned representations achieve state-of-the-art classification performance.
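The objective shape this abstract describes, matrix factorization plus an explicit penalty on pairwise factor relationships, can be sketched on tiny hand-fixed matrices. All values below are made up, and the coupling matrix is fixed by hand, whereas SLFA learns the coupling structure along with the factorization.

```python
# Toy sketch of an SLFA-style objective (illustrative): squared
# reconstruction error of X ~= W H plus a penalty coupling pairs of latent
# factors through the inner products of their representation rows.
X = [[1.0, 0.0],
     [0.0, 1.0]]   # data matrix
W = [[0.9, 0.1],
     [0.1, 0.9]]   # factor loadings
H = [[1.0, 0.0],
     [0.0, 1.0]]   # latent representations (one row per factor)
coupling = [[0.0, 0.5],
            [0.5, 0.0]]   # hand-fixed pairwise relationship strengths

recon = sum((X[i][j] - sum(W[i][k] * H[k][j] for k in range(2))) ** 2
            for i in range(2) for j in range(2))
pairwise = sum(coupling[a][b] * sum(H[a][j] * H[b][j] for j in range(2))
               for a in range(2) for b in range(2))
objective = recon + pairwise
print(round(objective, 4))
```

With orthogonal factor rows the pairwise term vanishes here, so the objective reduces to the small reconstruction error; correlated factors would be charged by the coupling term, which is what makes the learned dependency structure explicit.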

5 0.44172552 191 nips-2012-Learning the Architecture of Sum-Product Networks Using Clustering on Variables

Author: Aaron Dennis, Dan Ventura

Abstract: The sum-product network (SPN) is a recently-proposed deep model consisting of a network of sum and product nodes, and has been shown to be competitive with state-of-the-art deep models on certain difficult tasks such as image completion. Designing an SPN architecture that is suitable for the task at hand is an open question. We propose an algorithm for learning the SPN architecture from data. The idea is to cluster variables (as opposed to data instances) in order to identify variable subsets that strongly interact with one another. Nodes in the SPN are then allocated towards explaining these interactions. Experimental evidence shows that learning the SPN architecture significantly improves its performance compared to using a previously-proposed static architecture.
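What a sum-product network actually computes can be sketched by evaluating a tiny hand-built one: indicator leaves over binary variables, product nodes over disjoint variable scopes, and a weighted sum node at the root. The structure and weights below are invented for illustration; the paper's contribution is learning such structures from data rather than fixing them by hand.

```python
# Sketch of evaluating a tiny sum-product network (illustrative structure):
# leaves are indicators over two binary variables, products combine disjoint
# scopes, and the root sum node mixes the products with normalized weights.
def spn_prob(x1, x2):
    l1 = [1.0 - x1, x1]     # indicator leaves for X1 = 0 / X1 = 1
    l2 = [1.0 - x2, x2]     # indicator leaves for X2 = 0 / X2 = 1
    p_a = l1[1] * l2[1]     # product node: X1 = 1 and X2 = 1
    p_b = l1[0] * l2[0]     # product node: X1 = 0 and X2 = 0
    return 0.7 * p_a + 0.3 * p_b   # root sum node (weights sum to 1)

# A valid SPN with normalized sum weights defines a probability distribution:
total = sum(spn_prob(a, b) for a in (0, 1) for b in (0, 1))
print(total)  # -> 1.0
```

In this toy network X1 and X2 interact strongly (the mass sits on the two agreeing assignments), which is exactly the kind of variable subset the paper's clustering step would group under shared product nodes.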

6 0.43993607 233 nips-2012-Multiresolution Gaussian Processes

7 0.43458331 354 nips-2012-Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes

8 0.43456542 270 nips-2012-Phoneme Classification using Constrained Variational Gaussian Process Dynamical System

9 0.43318114 172 nips-2012-Latent Graphical Model Selection: Efficient Methods for Locally Tree-like Graphs

10 0.43128598 104 nips-2012-Dual-Space Analysis of the Sparse Linear Model

11 0.42981571 342 nips-2012-The variational hierarchical EM algorithm for clustering hidden Markov models

12 0.42907718 306 nips-2012-Semantic Kernel Forests from Multiple Taxonomies

13 0.42832598 274 nips-2012-Priors for Diversity in Generative Latent Variable Models

14 0.42808068 355 nips-2012-Truncation-free Online Variational Inference for Bayesian Nonparametric Models

15 0.42727476 12 nips-2012-A Neural Autoregressive Topic Model

16 0.42682344 193 nips-2012-Learning to Align from Scratch

17 0.42661932 215 nips-2012-Minimizing Uncertainty in Pipelines

18 0.42589715 197 nips-2012-Learning with Recursive Perceptual Representations

19 0.422959 77 nips-2012-Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models

20 0.42282131 188 nips-2012-Learning from Distributions via Support Measure Machines