nips nips2012 nips2012-219 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Charles Blundell, Jeff Beck, Katherine A. Heller
Abstract: We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. Social groups are often formed implicitly, through actions among members of groups. Yet many models of social networks use explicitly declared relationships to infer social structure. We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able to model reciprocity between groups of individuals. We then extend the Infinite Relational Model by using these reciprocating Hawkes processes to parameterise its edges, making events associated with edges co-dependent through time. Our model outperforms general, unstructured Hawkes processes as well as structured Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. [sent-11, score-0.156]
2 Social groups are often formed implicitly, through actions among members of groups. [sent-12, score-0.098]
3 Yet many models of social networks use explicitly declared relationships to infer social structure. [sent-13, score-0.303]
4 We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able to model reciprocity between groups of individuals. [sent-14, score-0.124]
5 We then extend the Infinite Relational Model by using these reciprocating Hawkes processes to parameterise its edges, making events associated with edges co-dependent through time. [sent-15, score-0.426]
6 Our model outperforms general, unstructured Hawkes processes as well as structured Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations. [sent-16, score-0.301]
7 1 Introduction As social animals, people constantly organise themselves into social groups. [sent-17, score-0.256]
8 These social groups can revolve around particular activities, such as sports teams, particular roles, such as store managers, or general social alliances, like gang members. [sent-18, score-0.271]
9 Understanding the dynamics of group interactions is a difficult problem that social scientists strive to address. [sent-19, score-0.168]
10 One basic problem in understanding group behaviour is that groups are often not explicitly defined, and the members must be inferred. [sent-20, score-0.12]
11 How can we predict future interactions among individuals based on these inferred groups? [sent-22, score-0.197]
12 A common approach is to infer groups, or clusters, of people based upon a declared relationship between pairs of individuals [1, 2, 3, 4]. [sent-23, score-0.237]
13 For example, data from social networks, where two people declare that they are “friends” or in each others’ social “neighbourhood”, can potentially be used. [sent-24, score-0.256]
14 However these declared relationships are not necessarily readily available, truthful, or pertinent to inferring the social group structure of interest. [sent-25, score-0.195]
15 In this paper we instead propose an approach to inferring social groups based directly on a set of real interactions between people. [sent-26, score-0.199]
16 If we are interested in capturing groups that best reflect human behaviour, we should determine those groups from instances of that same behaviour. [sent-28, score-0.131]
17 We develop a model which can learn social group structure based on interactions data. [sent-29, score-0.168]
18 As examples, the actions we consider are those of one person sending an email to another, one person speaking to another, or one country engaging in military action towards another. [sent-31, score-0.21]
19 The key property that we leverage to infer social groups is reciprocity. [sent-32, score-0.183]
20 Reciprocity is a common social norm whereby one person’s actions towards another increase the probability of the same type of action being returned. [sent-33, score-0.109]
21 For example, if Bob emails Alice, it increases the probability that Alice will email Bob in the near future. [sent-34, score-0.093]
22 When multiple people show a similar pattern of reciprocity, our model will place these people in their own group. [sent-36, score-0.076]
23 The Bayesian nonparametric model we use on these time-series data is generative and accounts for the rate of events between clusters of individuals. [sent-37, score-0.334]
24 It is built upon mutually-exciting point processes, known as Hawkes processes [5, 6]. [sent-38, score-0.113]
25 Pairs of mutually-exciting Hawkes processes are able to capture the causal nature of reciprocal interactions. [sent-39, score-0.169]
26 Here the processes excite one another through their actualised events. [sent-40, score-0.113]
27 Since Poisson processes are a special case of Hawkes processes, our model is also able to capture simpler one-way, non-reciprocal, relationships as well. [sent-41, score-0.134]
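As a concrete illustration of mutual excitation (not the authors' code), the following sketch simulates a pair of reciprocating processes by Ogata-style thinning; the exponential kernel, the parameter values, and the process labels "AB"/"BA" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: base rates, excitation size, decay scale.
gamma = {"AB": 0.1, "BA": 0.1}    # base rates of the two directed processes
beta, tau = 0.8, 1.0              # assumed kernel g(s) = beta * exp(-s / tau)
T = 50.0                          # observation window [0, T)

events = {"AB": [], "BA": []}     # event times of each directed process

def intensity(label, t):
    """Rate of process `label` at time t: its base rate plus excitation from
    past events of the *reciprocal* process (events at exactly t included,
    which keeps the thinning bound below valid just after a jump)."""
    other = "BA" if label == "AB" else "AB"
    return gamma[label] + sum(beta * np.exp(-(t - s) / tau)
                              for s in events[other] if s <= t)

# Ogata-style thinning: between events the intensities only decay, so their
# current value upper-bounds them until the next accepted event.
t = 0.0
while t < T:
    M = intensity("AB", t) + intensity("BA", t)
    t += rng.exponential(1.0 / M)
    if t >= T:
        break
    lam_ab, lam_ba = intensity("AB", t), intensity("BA", t)
    if rng.uniform() < (lam_ab + lam_ba) / M:          # accept the candidate
        label = "AB" if rng.uniform() < lam_ab / (lam_ab + lam_ba) else "BA"
        events[label].append(t)

print(len(events["AB"]), "A->B events;", len(events["BA"]), "B->A events")
```

Setting beta to zero recovers a pair of independent Poisson processes, matching the special-case remark above.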
28 The IRM typically assumes that there is a fixed graph, or social network, which is observed. [sent-43, score-0.109]
29 Here we are interested in inferring the implicit social structure based only on the occurrences of interactions between vertices in the graph. [sent-44, score-0.167]
30 We apply our model to reciprocal behaviour in verbal and email conversations and to military conflicts among nations. [sent-45, score-0.27]
31 The remainder of the paper is organised as follows: section 2 discusses using Poisson processes together with the IRM. [sent-46, score-0.113]
32 2 Poisson processes with the Infinite Relational Model The Infinite Relational Model (IRM) [1, 2] was developed to model relationships among entities as graphs, based upon previously declared relationships. [sent-49, score-0.202]
33 Let V denote the vertices of the graph, corresponding to individuals, and let euv denote the presence or absence of a relationship between vertices u and v, corresponding to an edge in the graph. [sent-50, score-0.091]
34 Hence vertex u belongs to the cluster given by π(u), and consequently, the clusters in π are given by range(π). [sent-52, score-0.125]
35 Often in interaction data there are many instances of interactions between the same pair of individuals–this cannot be modelled by the IRM. [sent-54, score-0.117]
36 Unfortunately, a vanilla Gamma-Poisson observation model does not allow us to predict events into the future, outside the observed time window. [sent-56, score-0.256]
37 We shall consider Poisson processes on $[0, \infty)$, such that the number of events in any interval $[s, s')$ of the half-line, denoted $N[s, s')$, is Poisson distributed with rate $\lambda (s' - s)$. [sent-60, score-0.397]
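A minimal numerical check of this defining property (a sketch; the rate and horizon below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

lam, T = 2.0, 1000.0                    # illustrative constant rate and horizon
n_total = rng.poisson(lam * T)          # total number of events on [0, T)
times = np.sort(rng.uniform(0.0, T, n_total))  # given the count, times are uniform

# Counts over disjoint unit intervals [s, s+1) should be Poisson(lam):
counts, _ = np.histogram(times, bins=np.arange(0.0, T + 1.0, 1.0))
print("mean", counts.mean(), "variance", counts.var(), "both ~", lam)
```

Note that, given the total count, the event times are uniform on the window, which is exactly the second deficiency discussed below.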
38 The graph in the top left shows the clusters and edge weights learned by our model from the data in the bottom right plot. [sent-68, score-0.097]
39 The top right plot shows the rates of interaction events between clusters. [sent-69, score-0.303]
40 In the graph, the width and temperature (how red the colour is) of an edge denote the expected rate of events between pairs of clusters (using equations (9) and (10)). [sent-71, score-0.356]
41 In the plots on the right, line colours indicate the identity of cluster pairs, and box colours indicate the originator of the event: Alice (red), Bob (blue), Mallory (black). [sent-72, score-0.121]
42 Only after many events caused by Mallory do Alice or Bob respond, and when they do, they both respond similarly and more sparsely. [sent-75, score-0.306]
43 Inference proceeds by conditioning on $N_{uv}[0, T) = n_{uv}$, where $n_{uv}$ is the total number of events directed from u to v in the given time interval. [sent-77, score-0.568]
44 There are two notable deficiencies of this model: the rate of events on each edge is independent of every other edge, and conditioned on the time interval containing all observed events, the times of these events are uniformly distributed. [sent-79, score-0.564]
45 If I send an email to someone, it is more likely that I will receive an email from them than had I not sent an email, and the probability of receiving a reply decreases as time advances. [sent-81, score-0.186]
46 These processes are intuitively similar to Poisson processes, but unlike Poisson processes, the rates of Hawkes processes depend upon their own historic events and those of other processes in an excitatory fashion. [sent-84, score-0.595]
47 We shall consider an array of K × K Hawkes processes, where K is the number of clusters in a partition drawn from a CRP restricted to the individuals V . [sent-85, score-0.185]
48 As in the IRM, the CRP allows the number of processes to grow in an unconstrained manner as the number of individuals in the graph grows. [sent-86, score-0.271]
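For concreteness, a generic sketch of drawing such a partition sequentially from a CRP (the standard Chinese restaurant construction, not the authors' sampler; the vertex names and the value of α are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def crp_partition(vertices, alpha):
    """Seat each vertex at an existing cluster with probability proportional
    to the cluster size, or at a new cluster with probability prop. to alpha."""
    assignment = {}
    cluster_sizes = []
    for v in vertices:
        probs = np.array(cluster_sizes + [alpha], dtype=float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(cluster_sizes):
            cluster_sizes.append(1)       # open a new cluster
        else:
            cluster_sizes[k] += 1
        assignment[v] = k                 # pi(v) = k
    return assignment

V = ["Alice", "Bob", "Mallory", "Carol", "Dave"]
pi = crp_partition(V, alpha=1.0)
K = len(set(pi.values()))
print(pi, "-> K =", K, "clusters, giving a K x K array of cluster-pair processes")
```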
49 However, unlike the IRM, these Hawkes processes will be pairwise-dependent: the Hawkes process governing events from cluster p to cluster q will depend upon the Hawkes process governing events from cluster q to cluster p. [sent-87, score-1.041]
50 Each Hawkes process is a point process whose rate at time t is given by: $\lambda_{pq}(t) = \gamma_{pq} n_p n_q + \int_{-\infty}^{t} g_{pq}(t - s)\,\mathrm{d}N_{qp}(s)$ (7), where $\gamma_{pq}$ is the base rate of the counting measure of the Hawkes process, $N_{pq}$. [sent-89, score-0.315]
51 $n_p$ and $n_q$ are the numbers of individuals in clusters p and q respectively, and $g_{pq}$ is a non-negative function such that $\int_0^{\infty} g_{pq}(s)\,\mathrm{d}s < 1$, ensuring that $N_{pq}$ is stationary. [sent-90, score-0.45]
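For example, under an exponential parameterisation of the kernel, say $g_{pq}(s) = \beta_{pq} e^{-s/\tau_{pq}}$ (an assumed form; the paper's exact parameterisation is not reproduced in this excerpt), the stationarity condition reads:

```latex
\int_{0}^{\infty} g_{pq}(s)\,\mathrm{d}s
  = \int_{0}^{\infty} \beta_{pq}\, e^{-s/\tau_{pq}}\,\mathrm{d}s
  = \beta_{pq}\,\tau_{pq} \;<\; 1 .
```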
52 $N_{qp}$ is the counting measure of the reciprocating Hawkes process of $N_{pq}$. [sent-91, score-0.124]
53 Intuitively, if Npq governs events from cluster p to cluster q, then Nqp governs events from cluster q to cluster p. [sent-92, score-0.812]
54 Equation (7) shows how the rates of events in these two processes are intimately intertwined. [sent-93, score-0.369]
55 Since $N_{qp}$ is an atomic measure, whose atoms correspond to the times of events, we can express the rate of $N_{pq}$ given in (7), by conditioning on the events of its reciprocating process $N_{qp}$, as: $\lambda_{pq}(t) = \gamma_{pq} n_p n_q + \sum_{i:\, t_i^{qp} < t} g_{pq}(t - t_i^{qp})$ (8), where $\{t_i^{qp}\}$ are the event times of $N_{qp}$; as before, $g_{pq}$ must integrate to less than 1 for the process to be stationary. [sent-94, score-0.703]
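A direct transcription of (8) as a rate function (a sketch: the exponential kernel and all numbers below are assumptions, not taken from the paper):

```python
import numpy as np

def rate_pq(t, t_qp, gamma_pq, n_p, n_q, beta=0.5, tau=2.0):
    """Equation (8): base rate gamma_pq * n_p * n_q plus excitation from all
    reciprocal events t_i^{qp} < t, here with an assumed exponential kernel."""
    t_qp = np.asarray(t_qp, float)
    past = t_qp[t_qp < t]
    return gamma_pq * n_p * n_q + np.sum(beta * np.exp(-(t - past) / tau))

# Illustrative: events from cluster q to cluster p observed at these times.
t_qp = [0.5, 1.2, 1.3, 4.0]
print(rate_pq(2.0, t_qp, gamma_pq=0.05, n_p=2, n_q=3))
```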
56 Henceforth we will use uniform thinning—each event in Npq (·) is assigned uniformly at random among all Nuv (·) where p = π(u) and q = π(v)—but in principle any thinning scheme may be used. [sent-97, score-0.094]
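A sketch of the uniform thinning step, assigning each cluster-level event to a specific edge; the partition and event times are made up for illustration, and the example only covers the case p ≠ q:

```python
import numpy as np

rng = np.random.default_rng(3)

pi = {"Alice": 0, "Bob": 0, "Mallory": 1}     # illustrative partition
cluster_events = {(1, 0): [0.7, 1.1, 1.8]}    # times of events from cluster 1 to cluster 0

edge_events = {}
for (p, q), times in cluster_events.items():
    senders = [u for u, k in pi.items() if k == p]
    receivers = [v for v, k in pi.items() if k == q]
    for t in times:
        u = senders[rng.integers(len(senders))]     # uniform over eligible senders
        v = receivers[rng.integers(len(receivers))] # uniform over eligible receivers
        edge_events.setdefault((u, v), []).append(t)  # N_uv inherits this event

print(edge_events)
```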
57 For a Hawkes process $N_{pq}$, the probability that no events occur in the interval $[s, s')$ is: $\exp\left(-\int_s^{s'} \lambda_{pq}(t)\,\mathrm{d}t\right)$ (15). Suppose we observe the times of all the events in $[0, T)$, $\{t_i^{uv}\}_{i=1}^{n_{uv}}$, for process $N_{uv}$ ($n_{uv}$ being the total number of events from u to v in $[0, T)$). [sent-98, score-0.87]
58 Suppose that individual u is in cluster p and that individual v is in cluster q. [sent-99, score-0.15]
59 Furthermore, assume there are no events before time 0. [sent-100, score-0.256]
60 The likelihood of each edge between individuals u and v is thus: $p(\{t_i^{uv}\}_{i=1}^{n_{uv}} \mid \theta_{pq}, \{t_i^{qp}\}_{i=1}^{n_{qp}}) = e^{-\frac{1}{n_p n_q}\int_0^T \lambda_{pq}(t)\,\mathrm{d}t} \prod_{i=1}^{n_{uv}} \frac{\lambda_{pq}(t_i^{uv})}{n_p n_q}$ (16), where $\theta_{pq} = (\gamma_{pq}, \beta_{pq}, \tau_{pq})$ and $\{t_i^{qp}\}_{i=1}^{n_{qp}}$ are the times of the reciprocal events. [sent-101, score-0.419]
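A hedged numerical sketch of the log of equation (16) for a single edge, again assuming an exponential kernel (as in the earlier sketches) so the integral term has a closed form; this illustrates the likelihood structure and is not the authors' implementation:

```python
import numpy as np

def edge_loglik(t_uv, t_qp, T, gamma, n_p, n_q, beta=0.5, tau=2.0):
    """Log of equation (16) for one edge (u, v) with pi(u)=p, pi(v)=q.
    Assumes g_pq(s) = beta * exp(-s / tau), so the compensator is closed form."""
    t_uv, t_qp = np.asarray(t_uv, float), np.asarray(t_qp, float)

    def lam(t):  # equation (8) with the assumed kernel
        past = t_qp[t_qp < t]
        return gamma * n_p * n_q + np.sum(beta * np.exp(-(t - past) / tau))

    # Integral of lam over [0, T): base part plus one term per reciprocal event.
    compensator = gamma * n_p * n_q * T
    compensator += np.sum(beta * tau * (1.0 - np.exp(-(T - t_qp[t_qp < T]) / tau)))

    log_rates = np.array([np.log(lam(t) / (n_p * n_q)) for t in t_uv])
    return -compensator / (n_p * n_q) + np.sum(log_rates)

# Illustrative event times on one edge and on its reciprocal edge.
print(edge_loglik(t_uv=[1.0, 1.4, 3.0], t_qp=[0.5, 1.2, 2.8],
                  T=5.0, gamma=0.05, n_p=2, n_q=3))
```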
61 To infer the partition of individuals π, the concentration parameter α, and the parameters of each Hawkes process θpq = (γpq , βpq , τpq ), we use Algorithm 5 [11] adapted to the IRM and slice sampling [12] to draw samples from the posterior. [sent-106, score-0.22]
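The exact samplers are those of [11] and [12]; purely as an illustration of the slice-sampling component, here is a generic univariate slice-sampling update (stepping out plus shrinkage) applied to a placeholder log-density standing in for the conditional posterior of one Hawkes parameter:

```python
import numpy as np

rng = np.random.default_rng(4)

def slice_sample(logp, x0, w=1.0, max_steps=50):
    """One univariate slice-sampling update targeting the density exp(logp);
    a generic sketch, not the authors' sampler."""
    log_y = logp(x0) - rng.exponential()          # slice level: log(u) + logp(x0)
    left = x0 - w * rng.uniform()                 # random initial bracket of width w
    right = left + w
    for _ in range(max_steps):                    # step out the left edge
        if logp(left) <= log_y:
            break
        left -= w
    for _ in range(max_steps):                    # step out the right edge
        if logp(right) <= log_y:
            break
        right += w
    while True:                                   # shrink until a point is accepted
        x1 = rng.uniform(left, right)
        if logp(x1) > log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

# Placeholder log-density (standard normal centred at 1.0) standing in for a
# conditional posterior over, say, log(gamma_pq).
logp = lambda z: -0.5 * (z - 1.0) ** 2
samples = [1.0]
for _ in range(1000):
    samples.append(slice_sample(logp, samples[-1]))
print(np.mean(samples), np.std(samples))
```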
62 6 Related work Several authors have considered modelling occurrence events [13, 14, 15] using piecewise-constant-rate Markov point processes for a known number of event types. [sent-110, score-1.194]
63 Our work directly models interaction events (where an event is structured to have a sender and recipient) and the number of possible event types is not limited. [sent-111, score-0.63]
64 [16] describes a model of occurrence events as a discrete time-series using a latent first-order Markov model. [sent-112, score-0.256]
65 Our model differs in that it considers interaction events in continuous time and requires no first-order assumption. [sent-113, score-0.303]
66 The model in Section 2 relates the work of [17] to the IRM [1], yielding a version of their model that learns the number of clusters whilst maintaining conjugacy. [sent-114, score-0.078]
67 However our model does not use a Poisson process to model event times, instead using processes which have a time-varying rate. [sent-115, score-0.186]
68 Hawkes processes are also the basis of this work, however our work does not use side-channel information to group individuals by imposing fixed marks on the process; instead we learn structure among several co-dependent Hawkes processes and use Bayesian inference for the parameters and structure. [sent-117, score-0.408]
69 Their interest is in modelling the activation and co-activation of neurons, and as such they do not directly model cluster structure among the neurons, while our model does. [sent-119, score-0.121]
70 We compared these models quantitatively via their log predictive densities (with respect to the space of ordered sequences of events) on events falling in the final 10% of the total time of the data (Table 2). [sent-127, score-0.256]
71 We normalised the times of all events such that the first 90% of total time lay in the interval [0, 1]. [sent-128, score-0.256]
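A small sketch of this preprocessing; it assumes the observation window ends at the last event, which may differ from the paper's convention:

```python
import numpy as np

def normalise_and_split(times, frac=0.9):
    """Rescale event times so the first `frac` of total time maps to [0, 1];
    events after that point form the held-out set."""
    times = np.sort(np.asarray(times, float))
    T_total = times.max()            # assumed end of the observation window
    rescaled = times / (frac * T_total)
    return rescaled[rescaled < 1.0], rescaled[rescaled >= 1.0]

train, test = normalise_and_split([0.3, 2.0, 3.5, 7.7, 9.1, 9.8])
print(train, test)
```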
72 The data involves three individuals and is plotted in Figure 1. [sent-131, score-0.135]
73 The Poisson IRM is uncertain how to cluster individuals as it cannot model the temporal dependence between individuals, while the Hawkes IRM can and so performs better at prediction as well. [sent-133, score-0.21]
74 A single Hawkes process does not model the structure among individuals and so performs worse than the Hawkes IRM, although it is able to model dependence among events. [sent-134, score-0.222]
75 Enron email threads We took the five longest threads from the Enron 2009 data set [21]. [sent-135, score-0.171]
76 All of these threads involve two different people so there is little scope for learning much group structure in these data: either both people are in the same cluster, or they are in two separate clusters. [sent-137, score-0.137]
77 However, as can be seen in Table 2, these data suggest a predictive advantage to using mutually-exciting Hawkes processes, as automatically determined by our model, instead of a single self-exciting Hawkes process, and of both of these approaches over their corresponding Poisson process-based models. [sent-138, score-0.15]
78 A self-exciting Hawkes process is unable to mark the sender and receiver of events as differing, whilst Poisson process-based models are unable to model the causal structure of events. [sent-139, score-0.378]
79 Figure 2: Graphs of clusters of individuals inferred by our model (left: individual speakers in the SB conversation data; right: the countries KUW, USA, AFG, TAW, IRQ, RUS and CHN from the MID data). [sent-142, score-0.185]
80 Edge width and temperature (how red the colour is) denote the expected rate of events between pairs of clusters (using equations (9) and (10); edges whose marginal rate is below 1 are not included). [sent-143, score-0.384]
81 On the left is the graph inferred on the “SB conv 26” data set. [sent-144, score-0.127]
82 These conversations cover a variety of social situations: questions during a university lecture (12), a book discussion group (23), a meeting among city officials (26), a family argument/discussion (33), and a conversation at a family birthday party (49). [sent-146, score-0.237]
83 We modelled the turn-taking behaviour of these groups by taking the times at which one speaker switched to the next. [sent-147, score-0.111]
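A sketch of that preprocessing step; the direction convention (previous speaker directed towards the next speaker) and the example turns are assumptions:

```python
def turn_taking_events(turns):
    """Convert a sequence of (time, speaker) turns into directed interaction
    events (previous_speaker, next_speaker, switch_time)."""
    events = []
    for (_, s_prev), (t_next, s_next) in zip(turns, turns[1:]):
        if s_next != s_prev:
            events.append((s_prev, s_next, t_next))
    return events

turns = [(0.0, "chair"), (12.4, "Alice"), (30.1, "chair"), (41.9, "Bob")]
print(turn_taking_events(turns))
```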
84 In Figure 2(left) we show the cluster graph found by our model for conversation 26, involving city officials discussing a grant application. [sent-148, score-0.156]
85 Incidents vary from diplomatic threats of military force to the actual deployment of military force against another state. [sent-154, score-0.098]
86 For exposition purposes, we show the graph (in Figure 2(right)) on part of the MID data set, by restricting to events among the USA, Kuwait, Afghanistan, Taiwan, Russia, China, and Iraq. [sent-158, score-0.304]
87 Thicker and redder lines between clusters (computed from equations 9 and 10) reflect a higher rate of incidents directed between the countries along the edge. [sent-159, score-0.225]
88 There were three main conflicts involving the countries we modelled during the time period this data covers. [sent-161, score-0.109]
89 1) The first revolved mostly around border disputes coming out of the Soviet war in Afghanistan, and incidents sometimes involved using former Soviet countries as proxies. [sent-163, score-0.246]
90 It is interesting to note that groups involving smaller countries were found to be more likely to initiate incidents with larger countries in a dispute (e. [sent-166, score-0.304]
91 [Table of data set sizes; only the N and T columns are recoverable, the Hawkes IRM E[K] and log-probability entries are truncated]
Data set     N    T
Synthetic     3   239
Small MID     7    57
Full MID     82   412
Enron 0       2   896
Enron 1       2   204
Enron 2       2   122
Enron 3       2   117
Enron 4       2    85
SB conv 23   18   832
SB conv 26   11    95
SB conv 12   12   133
SB conv 49   11   620
SB conv 33   10   499
[sent-170, score-0.52]
92 N denotes the number of individuals in the data set. [sent-249, score-0.135]
93 T denotes the total number of events in the data set. [sent-250, score-0.256]
94 [Table comparing the Hawkes IRM against the baseline models on each data set; the numeric entries are not recoverable]
95 The intuition behind why our model works well is that it captures part of the reciprocal nature of interactions among individuals in social situations, which in turn requires modelling some of the causal relationship of events. [sent-358, score-0.383]
96 For example, individuals might contribute to groups differently to one another. [sent-361, score-0.188]
97 There may be different kinds of events between individuals, as well as other side-channel information. [sent-362, score-0.391]
98 It would be interesting to consider other parameterisations of gpq (·) that, for example, include periods of delay between reciprocation; the exponential parameterisation lends itself to efficient computation [10] whilst other parameterisations do not necessarily have this property. [sent-364, score-0.189]
99 But different choices of gpq (·) may yield better statistical models. [sent-365, score-0.085]
100 Another interesting avenue is to explore other structure amongst interaction events using Hawkes processes, beyond reciprocity. [sent-366, score-0.303]
wordName wordTfidf (topN-words)
[('hawkes', 0.566), ('pq', 0.37), ('events', 0.256), ('irm', 0.25), ('alice', 0.187), ('mallory', 0.17), ('bob', 0.161), ('nuv', 0.156), ('poisson', 0.143), ('enron', 0.141), ('individuals', 0.135), ('npq', 0.113), ('processes', 0.113), ('sb', 0.11), ('social', 0.109), ('conv', 0.104), ('mid', 0.099), ('email', 0.093), ('gpq', 0.085), ('countries', 0.076), ('cluster', 0.075), ('incidents', 0.071), ('reciprocity', 0.071), ('icts', 0.058), ('conversation', 0.058), ('afghanistan', 0.057), ('disputes', 0.057), ('kuwait', 0.057), ('nqp', 0.057), ('reciprocating', 0.057), ('tqp', 0.057), ('groups', 0.053), ('clusters', 0.05), ('military', 0.049), ('crp', 0.049), ('interaction', 0.047), ('relational', 0.046), ('declared', 0.043), ('tuv', 0.042), ('war', 0.042), ('nq', 0.041), ('threads', 0.039), ('people', 0.038), ('russia', 0.037), ('iraq', 0.037), ('barbara', 0.037), ('process', 0.037), ('interactions', 0.037), ('event', 0.036), ('sender', 0.035), ('person', 0.034), ('reciprocal', 0.034), ('modelled', 0.033), ('ict', 0.033), ('santa', 0.033), ('taiwan', 0.033), ('thinning', 0.033), ('counting', 0.03), ('np', 0.029), ('cials', 0.028), ('dispute', 0.028), ('gran', 0.028), ('parameterisations', 0.028), ('rate', 0.028), ('whilst', 0.028), ('correlates', 0.028), ('slice', 0.027), ('david', 0.027), ('behaviour', 0.025), ('euv', 0.025), ('soviet', 0.025), ('dubois', 0.025), ('simma', 0.025), ('usa', 0.025), ('among', 0.025), ('respond', 0.025), ('edge', 0.024), ('corpus', 0.024), ('graph', 0.023), ('meeting', 0.023), ('colours', 0.023), ('conversations', 0.023), ('alan', 0.023), ('charles', 0.022), ('causal', 0.022), ('colour', 0.022), ('rochester', 0.022), ('liam', 0.022), ('group', 0.022), ('modelling', 0.021), ('china', 0.021), ('relationships', 0.021), ('infer', 0.021), ('governing', 0.021), ('verbal', 0.021), ('vertices', 0.021), ('members', 0.02), ('parameterisation', 0.02), ('recipient', 0.02), ('chair', 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 1.0 219 nips-2012-Modelling Reciprocating Relationships with Hawkes Processes
Author: Charles Blundell, Jeff Beck, Katherine A. Heller
Abstract: We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. Social groups are often formed implicitly, through actions among members of groups. Yet many models of social networks use explicitly declared relationships to infer social structure. We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able to model reciprocity between groups of individuals. We then extend the Infinite Relational Model by using these reciprocating Hawkes processes to parameterise its edges, making events associated with edges co-dependent through time. Our model outperforms general, unstructured Hawkes processes as well as structured Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations. 1
2 0.13266297 215 nips-2012-Minimizing Uncertainty in Pipelines
Author: Nilesh Dalvi, Aditya Parameswaran, Vibhor Rastogi
Abstract: In this paper, we consider the problem of debugging large pipelines by human labeling. We represent the execution of a pipeline using a directed acyclic graph of AND and OR nodes, where each node represents a data item produced by some operator in the pipeline. We assume that each operator assigns a confidence to each of its output data. We want to reduce the uncertainty in the output by issuing queries to a human, where a query consists of checking if a given data item is correct. In this paper, we consider the problem of asking the optimal set of queries to minimize the resulting output uncertainty. We perform a detailed evaluation of the complexity of the problem for various classes of graphs. We give efficient algorithms for the problem for trees, and show that, for a general dag, the problem is intractable. 1
3 0.076605499 356 nips-2012-Unsupervised Structure Discovery for Semantic Analysis of Audio
Author: Sourish Chaudhuri, Bhiksha Raj
Abstract: Approaches to audio classification and retrieval tasks largely rely on detectionbased discriminative models. We submit that such models make a simplistic assumption in mapping acoustics directly to semantics, whereas the actual process is likely more complex. We present a generative model that maps acoustics in a hierarchical manner to increasingly higher-level semantics. Our model has two layers with the first layer modeling generalized sound units with no clear semantic associations, while the second layer models local patterns over these sound units. We evaluate our model on a large-scale retrieval task from TRECVID 2011, and report significant improvements over standard baselines. 1
4 0.074872874 276 nips-2012-Probabilistic Event Cascades for Alzheimer's disease
Author: Jonathan Huang, Daniel Alexander
Abstract: Accurate and detailed models of neurodegenerative disease progression are crucially important for reliable early diagnosis and the determination of effective treatments. We introduce the ALPACA (Alzheimer’s disease Probabilistic Cascades) model, a generative model linking latent Alzheimer’s progression dynamics to observable biomarker data. In contrast with previous works which model disease progression as a fixed event ordering, we explicitly model the variability over such orderings among patients which is more realistic, particularly for highly detailed progression models. We describe efficient learning algorithms for ALPACA and discuss promising experimental results on a real cohort of Alzheimer’s patients from the Alzheimer’s Disease Neuroimaging Initiative. 1
5 0.074864723 345 nips-2012-Topic-Partitioned Multinetwork Embeddings
Author: Peter Krafft, Juston Moore, Bruce Desmarais, Hanna M. Wallach
Abstract: We introduce a new Bayesian admixture model intended for exploratory analysis of communication networks—specifically, the discovery and visualization of topic-specific subnetworks in email data sets. Our model produces principled visualizations of email networks, i.e., visualizations that have precise mathematical interpretations in terms of our model and its relationship to the observed data. We validate our modeling assumptions by demonstrating that our model achieves better link prediction performance than three state-of-the-art network models and exhibits topic coherence comparable to that of latent Dirichlet allocation. We showcase our model’s ability to discover and visualize topic-specific communication patterns using a new email data set: the New Hanover County email network. We provide an extensive analysis of these communication patterns, leading us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. Finally, we advocate for principled visualization as a primary objective in the development of new network models. 1
6 0.072663195 205 nips-2012-MCMC for continuous-time discrete-state systems
7 0.059932012 69 nips-2012-Clustering Sparse Graphs
8 0.057387214 299 nips-2012-Scalable imputation of genetic data with a discrete fragmentation-coagulation process
9 0.056543656 138 nips-2012-Fully Bayesian inference for neural models with negative-binomial spiking
10 0.056269586 126 nips-2012-FastEx: Hash Clustering with Exponential Families
11 0.047312792 315 nips-2012-Slice sampling normalized kernel-weighted completely random measure mixture models
12 0.047220781 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models
13 0.04717467 287 nips-2012-Random function priors for exchangeable arrays with applications to graphs and relational data
14 0.041908868 115 nips-2012-Efficient high dimensional maximum entropy modeling via symmetric partition functions
15 0.040982462 97 nips-2012-Diffusion Decision Making for Adaptive k-Nearest Neighbor Classification
16 0.04038265 99 nips-2012-Dip-means: an incremental clustering method for estimating the number of clusters
17 0.039717417 182 nips-2012-Learning Networks of Heterogeneous Influence
18 0.039474703 47 nips-2012-Augment-and-Conquer Negative Binomial Processes
19 0.03807044 59 nips-2012-Bayesian nonparametric models for bipartite graphs
20 0.037893757 68 nips-2012-Clustering Aggregation as Maximum-Weight Independent Set
topicId topicWeight
[(0, 0.094), (1, 0.02), (2, -0.018), (3, 0.014), (4, -0.083), (5, -0.012), (6, -0.011), (7, -0.01), (8, -0.025), (9, 0.058), (10, 0.009), (11, -0.048), (12, -0.018), (13, -0.048), (14, 0.016), (15, -0.043), (16, -0.046), (17, -0.002), (18, -0.015), (19, -0.057), (20, -0.012), (21, 0.013), (22, -0.023), (23, -0.026), (24, -0.007), (25, -0.053), (26, 0.019), (27, 0.011), (28, -0.066), (29, -0.055), (30, 0.047), (31, 0.041), (32, -0.001), (33, 0.039), (34, -0.025), (35, 0.024), (36, 0.121), (37, -0.014), (38, 0.065), (39, 0.052), (40, -0.112), (41, 0.095), (42, -0.006), (43, 0.033), (44, -0.034), (45, 0.033), (46, -0.018), (47, 0.018), (48, 0.041), (49, -0.024)]
simIndex simValue paperId paperTitle
same-paper 1 0.92993915 219 nips-2012-Modelling Reciprocating Relationships with Hawkes Processes
Author: Charles Blundell, Jeff Beck, Katherine A. Heller
Abstract: We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. Social groups are often formed implicitly, through actions among members of groups. Yet many models of social networks use explicitly declared relationships to infer social structure. We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able to model reciprocity between groups of individuals. We then extend the Infinite Relational Model by using these reciprocating Hawkes processes to parameterise its edges, making events associated with edges co-dependent through time. Our model outperforms general, unstructured Hawkes processes as well as structured Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations. 1
2 0.62274653 182 nips-2012-Learning Networks of Heterogeneous Influence
Author: Nan Du, Le Song, Ming Yuan, Alex J. Smola
Abstract: Information, disease, and influence diffuse over networks of entities in both natural systems and human society. Analyzing these transmission networks plays an important role in understanding the diffusion processes and predicting future events. However, the underlying transmission networks are often hidden and incomplete, and we observe only the time stamps when cascades of events happen. In this paper, we address the challenging problem of uncovering the hidden network only from the cascades. The structure discovery problem is complicated by the fact that the influence between networked entities is heterogeneous, which can not be described by a simple parametric model. Therefore, we propose a kernelbased method which can capture a diverse range of different types of influence without any prior assumption. In both synthetic and real cascade data, we show that our model can better recover the underlying diffusion network and drastically improve the estimation of the transmission functions among networked entities. 1
3 0.55051291 356 nips-2012-Unsupervised Structure Discovery for Semantic Analysis of Audio
Author: Sourish Chaudhuri, Bhiksha Raj
Abstract: Approaches to audio classification and retrieval tasks largely rely on detectionbased discriminative models. We submit that such models make a simplistic assumption in mapping acoustics directly to semantics, whereas the actual process is likely more complex. We present a generative model that maps acoustics in a hierarchical manner to increasingly higher-level semantics. Our model has two layers with the first layer modeling generalized sound units with no clear semantic associations, while the second layer models local patterns over these sound units. We evaluate our model on a large-scale retrieval task from TRECVID 2011, and report significant improvements over standard baselines. 1
4 0.53621006 59 nips-2012-Bayesian nonparametric models for bipartite graphs
Author: Francois Caron
Abstract: We develop a novel Bayesian nonparametric model for random bipartite graphs. The model is based on the theory of completely random measures and is able to handle a potentially infinite number of nodes. We show that the model has appealing properties and in particular it may exhibit a power-law behavior. We derive a posterior characterization, a generative process for network growth, and a simple Gibbs sampler for posterior simulation. Our model is shown to be well fitted to several real-world social networks. 1
5 0.53255153 276 nips-2012-Probabilistic Event Cascades for Alzheimer's disease
Author: Jonathan Huang, Daniel Alexander
Abstract: Accurate and detailed models of neurodegenerative disease progression are crucially important for reliable early diagnosis and the determination of effective treatments. We introduce the ALPACA (Alzheimer’s disease Probabilistic Cascades) model, a generative model linking latent Alzheimer’s progression dynamics to observable biomarker data. In contrast with previous works which model disease progression as a fixed event ordering, we explicitly model the variability over such orderings among patients which is more realistic, particularly for highly detailed progression models. We describe efficient learning algorithms for ALPACA and discuss promising experimental results on a real cohort of Alzheimer’s patients from the Alzheimer’s Disease Neuroimaging Initiative. 1
6 0.51841086 299 nips-2012-Scalable imputation of genetic data with a discrete fragmentation-coagulation process
7 0.47206062 205 nips-2012-MCMC for continuous-time discrete-state systems
8 0.45837146 244 nips-2012-Nonconvex Penalization Using Laplace Exponents and Concave Conjugates
9 0.45219713 73 nips-2012-Coding efficiency and detectability of rate fluctuations with non-Poisson neuronal firing
10 0.42788818 196 nips-2012-Learning with Partially Absorbing Random Walks
11 0.42507958 53 nips-2012-Bayesian Pedigree Analysis using Measure Factorization
12 0.42182127 69 nips-2012-Clustering Sparse Graphs
13 0.42177129 47 nips-2012-Augment-and-Conquer Negative Binomial Processes
14 0.4211551 342 nips-2012-The variational hierarchical EM algorithm for clustering hidden Markov models
15 0.42033842 336 nips-2012-The Coloured Noise Expansion and Parameter Estimation of Diffusion Processes
16 0.41698799 155 nips-2012-Human memory search as a random walk in a semantic network
17 0.38984725 150 nips-2012-Hierarchical spike coding of sound
18 0.38434488 215 nips-2012-Minimizing Uncertainty in Pipelines
19 0.38366872 194 nips-2012-Learning to Discover Social Circles in Ego Networks
20 0.38348505 60 nips-2012-Bayesian nonparametric models for ranked data
topicId topicWeight
[(0, 0.051), (21, 0.025), (38, 0.052), (41, 0.013), (42, 0.43), (53, 0.01), (54, 0.019), (55, 0.012), (74, 0.037), (76, 0.136), (80, 0.081), (92, 0.021)]
simIndex simValue paperId paperTitle
1 0.82519948 289 nips-2012-Recognizing Activities by Attribute Dynamics
Author: Weixin Li, Nuno Vasconcelos
Abstract: In this work, we consider the problem of modeling the dynamic structure of human activities in the attributes space. A video sequence is Ä?Ĺš rst represented in a semantic feature space, where each feature encodes the probability of occurrence of an activity attribute at a given time. A generative model, denoted the binary dynamic system (BDS), is proposed to learn both the distribution and dynamics of different activities in this space. The BDS is a non-linear dynamic system, which extends both the binary principal component analysis (PCA) and classical linear dynamic systems (LDS), by combining binary observation variables with a hidden Gauss-Markov state process. In this way, it integrates the representation power of semantic modeling with the ability of dynamic systems to capture the temporal structure of time-varying processes. An algorithm for learning BDS parameters, inspired by a popular LDS learning method from dynamic textures, is proposed. A similarity measure between BDSs, which generalizes the BinetCauchy kernel for LDS, is then introduced and used to design activity classiÄ?Ĺš ers. The proposed method is shown to outperform similar classiÄ?Ĺš ers derived from the kernel dynamic system (KDS) and state-of-the-art approaches for dynamics-based or attribute-based action recognition. 1
same-paper 2 0.80965996 219 nips-2012-Modelling Reciprocating Relationships with Hawkes Processes
Author: Charles Blundell, Jeff Beck, Katherine A. Heller
Abstract: We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. Social groups are often formed implicitly, through actions among members of groups. Yet many models of social networks use explicitly declared relationships to infer social structure. We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able to model reciprocity between groups of individuals. We then extend the Infinite Relational Model by using these reciprocating Hawkes processes to parameterise its edges, making events associated with edges co-dependent through time. Our model outperforms general, unstructured Hawkes processes as well as structured Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations. 1
3 0.79745078 242 nips-2012-Non-linear Metric Learning
Author: Dor Kedem, Stephen Tyree, Fei Sha, Gert R. Lanckriet, Kilian Q. Weinberger
Abstract: In this paper, we introduce two novel metric learning algorithms, χ2 -LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ2 -LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ2 -distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ2 -LMNN, obtain best results in 19 out of 20 learning settings. 1
4 0.73582059 243 nips-2012-Non-parametric Approximate Dynamic Programming via the Kernel Method
Author: Nikhil Bhat, Vivek Farias, Ciamac C. Moallemi
Abstract: This paper presents a novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees. In particular, we establish both theoretically and computationally that our proposal can serve as a viable alternative to state-of-the-art parametric ADP algorithms, freeing the designer from carefully specifying an approximation architecture. We accomplish this by developing a kernel-based mathematical program for ADP. Via a computational study on a controlled queueing network, we show that our procedure is competitive with parametric ADP approaches. 1
5 0.7213909 134 nips-2012-Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods
Author: Andre Wibisono, Martin J. Wainwright, Michael I. Jordan, John C. Duchi
Abstract: We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that √ gradient estimates based on random perturbations suffer a factor use of at most d in convergence rate over traditional stochastic gradient methods, where d is the problem dimension. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, which show that our bounds are sharp with respect to all problemdependent quantities: they cannot be improved by more than constant factors. 1
6 0.53476316 98 nips-2012-Dimensionality Dependent PAC-Bayes Margin Bound
7 0.51951754 275 nips-2012-Privacy Aware Learning
8 0.51448894 265 nips-2012-Parametric Local Metric Learning for Nearest Neighbor Classification
9 0.51188421 162 nips-2012-Inverse Reinforcement Learning through Structured Classification
10 0.50576711 9 nips-2012-A Geometric take on Metric Learning
11 0.50555462 307 nips-2012-Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning
12 0.50544578 171 nips-2012-Latent Coincidence Analysis: A Hidden Variable Model for Distance Metric Learning
13 0.49878541 281 nips-2012-Provable ICA with Unknown Gaussian Noise, with Implications for Gaussian Mixtures and Autoencoders
14 0.49589404 263 nips-2012-Optimal Regularized Dual Averaging Methods for Stochastic Optimization
15 0.49304584 325 nips-2012-Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions
16 0.49190602 285 nips-2012-Query Complexity of Derivative-Free Optimization
17 0.48342633 292 nips-2012-Regularized Off-Policy TD-Learning
18 0.48175758 157 nips-2012-Identification of Recurrent Patterns in the Activation of Brain Networks
19 0.48030162 217 nips-2012-Mixability in Statistical Learning
20 0.48000666 13 nips-2012-A Nonparametric Conjugate Prior Distribution for the Maximizing Argument of a Noisy Function