nips nips2006 nips2006-190 knowledge-graph by maker-knowledge-mining

190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields


Source: pdf

Author: Thomas Ott, Ruedi Stoop

Abstract: We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation, starting from message initialisations that prevent running into convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. [sent-7, score-0.708]

2 As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. [sent-8, score-0.109]

3 As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation, starting from message initialisations that prevent running into convergence problems. [sent-9, score-0.54]

4 Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks. [sent-10, score-0.308]

5 1 Introduction: Real brain structures employ inference algorithms as a basis for decision making. [sent-11, score-0.062]

6 Belief Propagation (BeP) is a popular, widely applicable inference algorithm that seems particularly suited for a neural implementation. [sent-12, score-0.071]

7 The algorithm is based on message passing between distributed elements that resembles the signal transduction within a neural network. [sent-13, score-0.299]

8 The analogy between BeP and neural networks is emphasised if BeP is formulated within the framework of Markov random fields (MRF). [sent-14, score-0.139]

9 MRF are related to spin models [1] that are often used as abstract models of neural networks with symmetric synaptic weights. [sent-15, score-0.351]

10 If a neural implementation of BeP can be realised on the basis of MRF, each neuron corresponds to a message passing element (hidden node of an MRF) and the synaptic weights reflect their pairwise dependencies. [sent-16, score-0.582]

11 The neural activity would then encode the messages that are passed between connected nodes. [sent-17, score-0.226]

12 Due to the highly recurrent nature of biological neural networks, MRF obtained in this correspondence to a neural network are naturally very “loopy”. [sent-18, score-0.163]

13 Convergence of BeP on loopy structures is, however, a delicate matter [1]-[2]. [sent-19, score-0.052]

14 Here, we show that BeP on binary MRF can be reformulated as continuous Hopfield networks along the lines of the sketched correspondence. [sent-20, score-0.189]

15 More precisely, the equations of a continuous Hopfield network are derived from the equations of BeP on a binary MRF, if there are many, but weak connections per neuron. [sent-21, score-0.233]

16 As a central result in this case, attractive fixed points of the Hopfield network provide very good approximations of BeP fixed points of the corresponding MRF. [sent-22, score-0.085]

17 In the Hopfield case a Lyapunov function guarantees the convergence towards these fixed points. [sent-23, score-0.048]

18 As a consequence, Hopfield networks implement BeP with guaranteed convergence. [sent-24, score-0.116]

19 The result of the inference is directly represented by the activity of the neurons in the steady state. [sent-25, score-0.086]

20 To illustrate this mechanism, we compare the magnetisations obtained in the original BeP framework to those from the Hopfield network framework, for a symmetric ferromagnetic model. [sent-26, score-0.254]

21 Hopfield networks may also serve as a guideline for the implementation or the detection of BeP in more realistic, e.g., spiking, network models. [sent-27, score-0.105]

22 By giving up the symmetric synaptic weight constraint, we may generalise the original BeP inference algorithm towards capturing neurally inspired message passing. [sent-30, score-0.447]

23 2 A Quick Review on Belief Propagation in Markov Random Fields: MRF have been used to formulate inference problems, e.g., in image processing. [sent-31, score-0.039]

24 In fact, both concepts are equivalent variants of graphical models [1]. [sent-34, score-0.028]

25 For instance, the pixel values of a grey-scaled image may be represented by the observed variables, whereas a particular hidden variable describes whether a pixel belongs to an object or to the background. [sent-36, score-0.026]

26 Eq. (1) can directly be reformulated as an Ising system, with a corresponding energy assigned to each spin configuration. [sent-41, score-0.075]
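
For reference, the standard Ising form that such a reformulation yields is sketched below; the couplings $J_{ij}$, the local fields $h_i$ and the spins $x_i \in \{-1,+1\}$ are generic notation for this sketch, not necessarily the paper's own symbols.

```latex
E(\mathbf{x}) \;=\; -\sum_{(i,j)} J_{ij}\, x_i x_j \;-\; \sum_i h_i\, x_i ,
\qquad x_i \in \{-1,+1\},
\qquad p(\mathbf{x}) \;\propto\; e^{-E(\mathbf{x})} .
```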

27 The inference task inherent to MRF amounts to extracting the marginal probabilities according to Eq. (4). [sent-46, score-0.058]
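
In this notation, the marginal referred to here is presumably the usual single-node quantity, obtained by summing the joint distribution over all other variables:

```latex
p_i(x_i) \;=\; \sum_{\mathbf{x}\setminus x_i} p(\mathbf{x})
\;=\; \frac{1}{Z} \sum_{\mathbf{x}\setminus x_i}
\exp\Big( \sum_{(j,k)} J_{jk}\, x_j x_k \;+\; \sum_j h_j\, x_j \Big).
```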

28 An exact evaluation of these marginals is intractable in general; BeP provides us with approximated marginals within a reasonable time. [sent-48, score-0.023]

29 This approach is based on the idea that connected elements (where a connection is given by a nonzero pairwise coupling) interchange messages that contain a recommendation about what state the other elements should be in [1]. [sent-49, score-0.191]

30 Usually, the messages are normalised at every time step, i.e., rescaled so that they sum to one. [sent-52, score-0.147]

31 After (5) has converged, the marginals are approximated by the so-called beliefs, which are calculated according to Eq. (6) up to a normalisation constant. [sent-55, score-0.101]
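
A minimal Python sketch of this message-passing scheme on a small binary MRF is given below. The coupling matrix `J`, the fields `h`, the parallel (undamped) update schedule and the function name `belief_propagation` are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def belief_propagation(J, h, n_iters=200, tol=1e-8):
    """Sum-product BeP on a binary pairwise MRF with couplings J and fields h.

    Returns the beliefs b[i, s] over the states (-1, +1) and the local
    magnetisations m[i] = b[i, +1] - b[i, -1].
    """
    n = len(h)
    states = np.array([-1.0, 1.0])
    # msg[i, j, s]: message from node i to node j about state s of node j
    msg = np.full((n, n, 2), 0.5)
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i, j] != 0.0]

    for _ in range(n_iters):
        new = msg.copy()
        for (i, j) in edges:
            for sj, xj in enumerate(states):
                total = 0.0
                for si, xi in enumerate(states):
                    # pairwise factor * local factor * incoming messages (excluding j)
                    prod = np.exp(J[i, j] * xi * xj + h[i] * xi)
                    for k in range(n):
                        if k != j and J[k, i] != 0.0:
                            prod *= msg[k, i, si]
                    total += prod
                new[i, j, sj] = total
            new[i, j] /= new[i, j].sum()        # normalise at every step
        if np.max(np.abs(new - msg)) < tol:
            msg = new
            break
        msg = new

    beliefs = np.zeros((n, 2))
    for i in range(n):
        for si, xi in enumerate(states):
            b = np.exp(h[i] * xi)
            for k in range(n):
                if J[k, i] != 0.0:
                    b *= msg[k, i, si]
            beliefs[i, si] = b
        beliefs[i] /= beliefs[i].sum()
    return beliefs, beliefs[:, 1] - beliefs[:, 0]
```

On a tree-structured `J` the returned magnetisations agree with exact enumeration; on loopy graphs they are the usual loopy-BeP approximations discussed in the text.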

32 In particular in connection with Ising systems, one is primarily interested in the expectation value of each spin, the so-called local magnetisation. [sent-56, score-0.044]

33 The coefficients of the Hopfield dynamics are the connection (synaptic) weights, which need to be symmetric in the Hopfield model. [sent-61, score-0.128]

34 According to the sketched picture, each neuron represents a node of the MRF, whereas the messages are encoded in the neuronal state variables. [sent-66, score-0.304]

35 The Hopfield architecture implements the point attractor paradigm. [sent-68, score-0.054]

36 That is, by means of the dynamics, the network is driven into a fixed point. [sent-70, score-0.09]

37 We will now realise the translation from MRF into Hopfield networks as follows: (1) reduction of the number of messages per connection from two to one reparameterised variable. [sent-73, score-0.358]

38 (3) Translation of the obtained equations into the equations of a Hopfield network, where we find the encoding of the BeP variables in terms of the neural activities and synaptic weights. [sent-75, score-0.086]

39 This will establish the exact relationship between Hopfield networks and BeP. [sent-76, score-0.06]

40 Reparametrisation of the messages: the messages can be reparameterised [2] according to Eqs. (8) and (9). [sent-78, score-0.166]

41 In the case of binary variables, the used reparametrisation translates the update rules into an additive form (“log domain”), which is a basic assumption of most models of neural networks. [sent-82, score-0.264]
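
For binary variables, the textbook log-domain form of this reparametrisation (written here with generic couplings $J_{ij}$ and fields $h_i$, as an illustration of what Eqs. (8)-(9) presumably express) is

```latex
y_{i\to j} \;=\; \tfrac{1}{2}\,\log\frac{m_{i\to j}(+1)}{m_{i\to j}(-1)},
\qquad
y_{i\to j} \;\leftarrow\; \operatorname{atanh}\!\Big[ \tanh(J_{ij})\,
\tanh\!\Big( h_i + \sum_{k\in N(i)\setminus j} y_{k\to i} \Big) \Big],
```

so that products of messages become sums of the single variables $y_{i\to j}$.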

42 Eq. (9) can be translated into an equivalent time-continuous system that shares the same fixed points. [sent-85, score-0.023]
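
A generic way to write such a relaxation, with an assumed relaxation time constant $\tau$, is

```latex
\tau\,\dot{y}_{i\to j}(t) \;=\; -\,y_{i\to j}(t) \;+\;
\operatorname{atanh}\!\Big[ \tanh(J_{ij})\,
\tanh\!\Big( h_i + \sum_{k\in N(i)\setminus j} y_{k\to i}(t) \Big) \Big],
```

whose stationary states are exactly the fixed points of the discrete update.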

43 Translation into a Hopfield network: the comparison between the time-continuous BeP system and the Hopfield equation (7) reveals the correspondence. [sent-90, score-0.065]

44 That is, a message corresponds to the presynaptic neural activity weighted by the synaptic strength. [sent-94, score-0.381]

45 In the following, we assume that the synaptic weights are relatively weak. [sent-97, score-0.184]

46 That is, if a neuron receives many inputs (a large number of connections), then each single contribution can be neglected. [sent-101, score-0.133]

47 In other words, the subset defined by this initialisation is invariant under the dynamics of (14). [sent-105, score-0.025]

48 After the convergence to an attractor fixed point, the local magnetisation is simply given by the neural activity. [sent-107, score-0.209]
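
A minimal Euler-integration sketch of a continuous Hopfield network of this general type, with the magnetisations read out as the asymptotic activities, could look as follows; the tanh transfer function, the time constant, the step size and the zero initialisation (corresponding to uniform messages) are assumptions of this sketch.

```python
import numpy as np

def run_hopfield(W, I, tau=1.0, dt=0.05, t_max=200.0):
    """Continuous Hopfield dynamics  tau * du/dt = -u + W @ tanh(u) + I.

    Returns the asymptotic activities tanh(u), which in the sketched
    correspondence play the role of the local magnetisations."""
    u = np.zeros(len(I))                  # u = 0 <-> uniform initial messages
    for _ in range(int(t_max / dt)):
        u += dt / tau * (-u + W @ np.tanh(u) + I)
    return np.tanh(u)
```

Feeding in a weight matrix `W` with many weak entries per row and comparing the output to the BeP magnetisations from the sketch above is one way to probe the correspondence; the precise scaling between `W` and the MRF couplings is not specified here.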

49 This is because the fixed-point and the read-out equations collapse under this condition. [sent-108, score-0.066]

50 From a biological point of view, the first two points seem reasonable. [sent-112, score-0.034]

51 The effect of a single synapse is typically small compared to the totality of the numerous synaptic inputs of a cell [7]-[8]. [sent-113, score-0.165]

52 In order to establish a firm biological correspondence, particular consideration will be required for the last point. [sent-115, score-0.072]

53 In the next section, we show that Hopfield networks are guaranteed to converge and thus, the required initialisation can be considered a natural choice for BeP on MRF with the properties (I) and (II). [sent-116, score-0.137]


55 Guarantee of convergence: consider a basic Hopfield model of the form given in Eq. (16). [sent-118, score-0.026]

56 It has the same attractor structure as the model (7) described above (see [6] and references therein). [sent-119, score-0.054]

57 For the former model, an explicit Lyapunov function has been constructed; the synaptic weights are automatically restricted to a bounded interval. [sent-120, score-0.184]

58 Figure 1: The magnetisation as a function of the temperature and of the synaptic weight, for the symmetric ferromagnetic model. [sent-125, score-0.23]

59 The results for the original BeP (grey stars) and for the Hopfield network (black circles) are compared. [sent-126, score-0.065]

60 The construction in [9] assures that these networks, and with them the networks considered by us, are globally asymptotically stable [6]. [sent-127, score-0.192]
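
For symmetric weights, the classical Lyapunov (energy) function of such a network can also be monitored numerically. The sketch below uses the standard Cohen-Grossberg/Hopfield energy for the tanh transfer function and is meant only as a sanity check of convergence, not as the explicit function constructed in [9].

```python
import numpy as np

def hopfield_energy(u, W, I):
    """Energy that is non-increasing along  tau du/dt = -u + W tanh(u) + I
    when W is symmetric (tanh transfer function assumed)."""
    g = np.clip(np.tanh(u), -1 + 1e-12, 1 - 1e-12)
    # integral of atanh from 0 to g:  g*atanh(g) + 0.5*log(1 - g^2)
    leak = g * np.arctanh(g) + 0.5 * np.log1p(-g ** 2)
    return -0.5 * g @ W @ g - I @ g + leak.sum()

def run_with_energy(W, I, tau=1.0, dt=0.05, steps=4000, seed=0):
    rng = np.random.default_rng(seed)
    u = 1e-3 * rng.standard_normal(len(I))
    energies = []
    for _ in range(steps):
        energies.append(hopfield_energy(u, W, I))
        u += dt / tau * (-u + W @ np.tanh(u) + I)
    return u, np.array(energies)          # energies should be non-increasing
```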

61 To realise this symmetric model, we neglect contributions from an external field and may either think of an infinitely extended network or of a network with some spatial periodicity. [sent-132, score-0.249]

62 According to the last section, the synaptic weight is related to the coupling of the corresponding spin model, where, for convenience, we reintroduced a quasi-temperature as a scaling parameter. [sent-135, score-0.046]

63 The state of vanishing magnetisation is always a fixed point of the system. [sent-136, score-0.091]

64 However, this fixed point is only stable up to a bifurcation point, which is given by Eq. (18). [sent-139, score-0.05]

65 › ©á £ j» 7 A » e 0 d ' d xs‚ … w ƒ Ù ” œ Ø   × t © Ù ” œ Ø  ˆ¯ut × t Ö | µ ' xs‚ Ô Ó 0 Iv … w ƒ q (18) This follows from the critical condition . [sent-140, score-0.044]

66 Below the critical point, two additional stable fixed points emerge, which are symmetric with respect to the origin. [sent-141, score-0.071]

67 After the convergence to a stable fixed point, the obtained magnetisation is shown as a function of the temperature in Fig. 1a. [sent-142, score-0.134]
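
A Fig. 1-style curve can be sketched numerically: for a homogeneous network with q identical connections per neuron, the fixed-point condition reduces to a scalar self-consistency equation. The reduction below assumes an effective weight w = J/T per connection (a small-coupling simplification), which is not necessarily the paper's exact mapping between weights, couplings and the quasi-temperature.

```python
import numpy as np

def magnetisation_curve(q=4, J=1.0, temperatures=np.linspace(0.2, 6.0, 30)):
    """Stable solution of m = tanh(q * w * m) with w = J / T, found by
    fixed-point iteration from a small positive start for each temperature."""
    curve = []
    for T in temperatures:
        w = J / T
        m = 0.1
        for _ in range(1000):
            m = np.tanh(q * w * m)
        curve.append((T, m))
    return curve
```

Scanning T in this way reproduces the qualitative picture of Fig. 1: the magnetisation vanishes above a critical temperature and grows continuously below it.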

68 The critical point is found at a characteristic temperature; the result is compared to the result obtained on the basis of the original BeP equations (5) (grey stars in Fig. 1a). [sent-144, score-0.154]

69 We see that the critical point is slightly lower in the original BeP case. [sent-145, score-0.044]

70 This is consistent with Eq. (9), for which the fixed point given by the initial messages loses stability at the critical temperature of Eq. (19). [sent-147, score-0.432]

71 This is in fact the critical temperature for Ising grids obtained in the Bethe-Peierls approximation [5]. [sent-148, score-0.082]
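
For reference, the textbook Bethe-Peierls critical temperature for coordination number q (with q = 4 for a square grid) follows from the stability condition of the paramagnetic solution:

```latex
(q-1)\,\tanh\!\big(J/T_c\big) \;=\; 1
\quad\Longrightarrow\quad
T_c \;=\; \frac{2J}{\ln\!\big(q/(q-2)\big)}
\;\overset{q=4}{=}\; \frac{2J}{\ln 2} \;\approx\; 2.885\,J .
```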

72 In this way, we casually come across the deep relationship between BeP and Bethe-Peierls, which has been established by the theorem stating that stable BeP fixed points are local minima of the Bethe free energy functional [1],[10]. [sent-149, score-0.048]

73 For small couplings, the two results are also identical in the case of the ferromagnetic couplings studied here. [sent-156, score-0.139]

74 It is only around the critical values that the two results seem to differ. [sent-157, score-0.044]

75 A comparison of the results against the synaptic weight, however, shows an almost perfect agreement for all values. [sent-158, score-0.167]

76 5 Discussion and Outlook: In this report, we outlined the general structural affinity between belief propagation on binary Markov random fields and continuous Hopfield networks. [sent-160, score-0.213]

77 According to this analogy, synaptic weights correspond to the pairwise dependencies in the MRF and the neuronal signal transduction corresponds to the message exchange. [sent-161, score-0.366]

78 In the limit of many synaptic connections per neuron, but comparatively small individual synaptic weights, the dynamics of the Hopfield network is an exact mirror of the BeP dynamics in its time-continuous form. [sent-162, score-0.46]

79 To achieve the agreement, the choice of initial messages needs to be confined. [sent-163, score-0.147]

80 From this we can conclude that Hopfield network attractors are also BeP attractors (whereas the opposite does not necessarily hold). [sent-164, score-0.147]

81 Unlike BeP, Hopfield networks are guaranteed to converge to a fixed point. [sent-165, score-0.083]

82 We may thus argue that Hopfield networks naturally implement useful message initialisations that prevent trapping into a limit cycle. [sent-166, score-0.335]

83 As a further benefit, the local magnetisations, as the result of the inference process, are just reflected in the asymptotic neural activity. [sent-167, score-0.071]

84 The binary basis of the implementation is not necessarily a drawback, but could simply reflect the fact that many decisions have a yes-or-no character. [sent-168, score-0.045]

85 The Hopfield network model is still a crude simplification of biological neural networks and the relevance of our results for such real-world structures remains somewhat open. [sent-170, score-0.214]

86 However, the search for a possible neural implementation of BeP is appealing and different concepts have already been outlined [11]. [sent-171, score-0.102]

87 This approach shares our guiding idea that the neural activity should directly be interpreted as a message passing process. [sent-172, score-0.347]

88 Whereas our approach is a mathematically rigorous intermediate step towards more realistic models, the approach chosen in [11] tries to directly implement BeP with spiking neurons. [sent-173, score-0.086]

89 In accordance with the guiding idea, our future work will comprise three major steps. [sent-174, score-0.026]

90 First, we take the step from Hopfield networks to networks with spiking elements. [sent-175, score-0.197]

91 Here, the question is to what extent the concepts of message passing can be adapted or reinterpreted so that a BeP implementation is possible. [sent-176, score-0.292]

92 Second, we will give up the artificial requirement of symmetric synaptic weights. [sent-177, score-0.19]

93 To do this, we might have to modify the original BeP concept, while we still may want to stick to the message passing idea. [sent-178, score-0.242]

94 After all, there is no obvious reason why the brain should implement exactly the BeP algorithm. [sent-179, score-0.056]

95 It rather seems plausible that the brain employs inference algorithms that might be conceptually close to BeP. [sent-180, score-0.062]

96 Furthermore, we need to explore how the underlying structure could actually be learnt by a neural system. [sent-182, score-0.032]

97 Message passing-based inference algorithms offer an attractive alternative to traditional notions of computation inspired by computer science, paving the way towards a more profound understanding of natural computation [12]. [sent-183, score-0.081]

98 To judge its eligibility, there is - ultimately - one question: How can the usefulness (or inappropriateness) of the message passing concept in connection with biological networks be verified or challenged experimentally? [sent-184, score-0.403]

99 (2005) On the properties of the Bethe approximation and loopy belief propagation on binary networks. [sent-200, score-0.22]

100 (2004) On the uniqueness of loopy belief propagation fixed points. [sent-241, score-0.197]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('hop', 0.59), ('bep', 0.555), ('mrf', 0.188), ('eld', 0.167), ('message', 0.157), ('messages', 0.147), ('synaptic', 0.145), ('ferromagnetic', 0.103), ('qq', 0.091), ('passing', 0.085), ('networks', 0.083), ('magnetisation', 0.082), ('neuron', 0.079), ('belief', 0.073), ('propagation', 0.072), ('network', 0.065), ('neurodynamics', 0.062), ('stoop', 0.062), ('thq', 0.062), ('ising', 0.061), ('initialisation', 0.054), ('lyapunov', 0.054), ('attractor', 0.054), ('loopy', 0.052), ('stability', 0.05), ('activity', 0.047), ('ha', 0.047), ('spin', 0.046), ('symmetric', 0.045), ('connection', 0.044), ('critical', 0.044), ('equations', 0.043), ('zurich', 0.043), ('jacobian', 0.043), ('translation', 0.043), ('attractors', 0.041), ('htq', 0.041), ('initialisations', 0.041), ('jit', 0.041), ('magnetisations', 0.041), ('realise', 0.041), ('reparametrisation', 0.041), ('ruedi', 0.041), ('weights', 0.039), ('inference', 0.039), ('establish', 0.038), ('temperature', 0.038), ('normalisation', 0.036), ('couplings', 0.036), ('boltzmann', 0.035), ('connections', 0.034), ('biological', 0.034), ('implement', 0.033), ('external', 0.033), ('geman', 0.033), ('bethe', 0.033), ('neural', 0.032), ('spiking', 0.031), ('neuroinformatics', 0.03), ('dl', 0.03), ('reformulated', 0.029), ('uv', 0.029), ('stars', 0.029), ('sketched', 0.029), ('concepts', 0.028), ('switzerland', 0.026), ('guiding', 0.026), ('grey', 0.026), ('whereas', 0.026), ('convergence', 0.026), ('stable', 0.026), ('continuous', 0.025), ('dynamics', 0.025), ('sa', 0.025), ('picture', 0.025), ('transduction', 0.025), ('analogy', 0.024), ('translated', 0.023), ('beliefs', 0.023), ('brain', 0.023), ('binary', 0.023), ('node', 0.023), ('connectivity', 0.023), ('marginals', 0.023), ('read', 0.023), ('implementation', 0.022), ('towards', 0.022), ('agreement', 0.022), ('relationship', 0.022), ('rules', 0.021), ('limit', 0.021), ('xed', 0.021), ('chapter', 0.021), ('markov', 0.021), ('hidden', 0.021), ('outlined', 0.02), ('inputs', 0.02), ('attractive', 0.02), ('according', 0.019), ('circles', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields

Author: Thomas Ott, Ruedi Stoop

Abstract: We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation, starting from message initialisations that prevent running into convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks.

2 0.11585026 57 nips-2006-Conditional mean field

Author: Peter Carbonetto, Nando D. Freitas

Abstract: Despite all the attention paid to variational methods based on sum-product message passing (loopy belief propagation, tree-reweighted sum-product), these methods are still bound to inference on a small set of probabilistic models. Mean field approximations have been applied to a broader set of problems, but the solutions are often poor. We propose a new class of conditionally-specified variational approximations based on mean field theory. While not usable on their own, combined with sequential Monte Carlo they produce guaranteed improvements over conventional mean field. Moreover, experiments on a well-studied problem— inferring the stable configurations of the Ising spin glass—show that the solutions can be significantly better than those obtained using sum-product-based methods. 1

3 0.11418825 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation

Author: Daniel Tarlow, Gal Elidan, Daphne Koller, John C. Duchi

Abstract: In general, the problem of computing a maximum a posteriori (MAP) assignment in a Markov random field (MRF) is computationally intractable. However, in certain subclasses of MRF, an optimal or close-to-optimal assignment can be found very efficiently using combinatorial optimization algorithms: certain MRFs with mutual exclusion constraints can be solved using bipartite matching, and MRFs with regular potentials can be solved using minimum cut methods. However, these solutions do not apply to the many MRFs that contain such tractable components as sub-networks, but also other non-complying potentials. In this paper, we present a new method, called C OMPOSE, for exploiting combinatorial optimization for sub-networks within the context of a max-product belief propagation algorithm. C OMPOSE uses combinatorial optimization for computing exact maxmarginals for an entire sub-network; these can then be used for inference in the context of the network as a whole. We describe highly efficient methods for computing max-marginals for subnetworks corresponding both to bipartite matchings and to regular networks. We present results on both synthetic and real networks encoding correspondence problems between images, which involve both matching constraints and pairwise geometric constraints. We compare to a range of current methods, showing that the ability of C OMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.

4 0.10051196 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

Author: Thomas Voegtlin

Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1

5 0.088140741 74 nips-2006-Efficient Structure Learning of Markov Networks using $L 1$-Regularization

Author: Su-in Lee, Varun Ganapathi, Daphne Koller

Abstract: Markov networks are commonly used in a wide variety of applications, ranging from computer vision, to natural language, to computational biology. In most current applications, even those that rely heavily on learned models, the structure of the Markov network is constructed by hand, due to the lack of effective algorithms for learning Markov network structure from data. In this paper, we provide a computationally efficient method for learning Markov network structure from data. Our method is based on the use of L1 regularization on the weights of the log-linear model, which has the effect of biasing the model towards solutions where many of the parameters are zero. This formulation converts the Markov network learning problem into a convex optimization problem in a continuous space, which can be solved using efficient gradient methods. A key issue in this setting is the (unavoidable) use of approximate inference, which can lead to errors in the gradient computation when the network structure is dense. Thus, we explore the use of different feature introduction schemes and compare their performance. We provide results for our method on synthetic data, and on two real world data sets: pixel values in the MNIST data, and genetic sequence variations in the human HapMap data. We show that our L1 -based method achieves considerably higher generalization performance than the more standard L2 -based method (a Gaussian parameter prior) or pure maximum-likelihood learning. We also show that we can learn MRF network structure at a computational cost that is not much greater than learning parameters alone, demonstrating the existence of a feasible method for this important problem.

6 0.086621813 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall

7 0.081749342 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

8 0.075736605 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network

9 0.07541883 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

10 0.07515014 69 nips-2006-Distributed Inference in Dynamical Systems

11 0.075106159 35 nips-2006-Approximate inference using planar graph decomposition

12 0.064046957 41 nips-2006-Bayesian Ensemble Learning

13 0.060724303 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

14 0.057595327 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

15 0.056734975 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

16 0.056490622 113 nips-2006-Learning Structural Equation Models for fMRI

17 0.050509576 154 nips-2006-Optimal Change-Detection and Spiking Neurons

18 0.047454439 43 nips-2006-Bayesian Model Scoring in Markov Random Fields

19 0.040581379 181 nips-2006-Stability of $K$-Means Clustering

20 0.039618168 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.129), (1, -0.12), (2, 0.03), (3, -0.017), (4, 0.086), (5, 0.015), (6, 0.08), (7, 0.056), (8, -0.066), (9, -0.074), (10, 0.016), (11, -0.096), (12, -0.009), (13, 0.041), (14, 0.022), (15, -0.01), (16, -0.071), (17, 0.123), (18, 0.06), (19, 0.149), (20, 0.023), (21, 0.07), (22, -0.027), (23, 0.051), (24, -0.017), (25, 0.024), (26, 0.096), (27, 0.037), (28, -0.069), (29, -0.041), (30, -0.066), (31, -0.072), (32, -0.07), (33, -0.027), (34, 0.003), (35, -0.032), (36, 0.001), (37, -0.141), (38, 0.039), (39, 0.027), (40, -0.099), (41, 0.077), (42, 0.097), (43, 0.034), (44, 0.033), (45, -0.007), (46, 0.077), (47, 0.135), (48, 0.049), (49, 0.036)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95851105 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields

Author: Thomas Ott, Ruedi Stoop

Abstract: We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation, starting from message initialisations that prevent running into convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks.

2 0.6369552 201 nips-2006-Using Combinatorial Optimization within Max-Product Belief Propagation

Author: Daniel Tarlow, Gal Elidan, Daphne Koller, John C. Duchi

Abstract: In general, the problem of computing a maximum a posteriori (MAP) assignment in a Markov random field (MRF) is computationally intractable. However, in certain subclasses of MRF, an optimal or close-to-optimal assignment can be found very efficiently using combinatorial optimization algorithms: certain MRFs with mutual exclusion constraints can be solved using bipartite matching, and MRFs with regular potentials can be solved using minimum cut methods. However, these solutions do not apply to the many MRFs that contain such tractable components as sub-networks, but also other non-complying potentials. In this paper, we present a new method, called C OMPOSE, for exploiting combinatorial optimization for sub-networks within the context of a max-product belief propagation algorithm. C OMPOSE uses combinatorial optimization for computing exact maxmarginals for an entire sub-network; these can then be used for inference in the context of the network as a whole. We describe highly efficient methods for computing max-marginals for subnetworks corresponding both to bipartite matchings and to regular networks. We present results on both synthetic and real networks encoding correspondence problems between images, which involve both matching constraints and pairwise geometric constraints. We compare to a range of current methods, showing that the ability of C OMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.

3 0.54937381 57 nips-2006-Conditional mean field

Author: Peter Carbonetto, Nando D. Freitas

Abstract: Despite all the attention paid to variational methods based on sum-product message passing (loopy belief propagation, tree-reweighted sum-product), these methods are still bound to inference on a small set of probabilistic models. Mean field approximations have been applied to a broader set of problems, but the solutions are often poor. We propose a new class of conditionally-specified variational approximations based on mean field theory. While not usable on their own, combined with sequential Monte Carlo they produce guaranteed improvements over conventional mean field. Moreover, experiments on a well-studied problem— inferring the stable configurations of the Ising spin glass—show that the solutions can be significantly better than those obtained using sum-product-based methods. 1

4 0.49527895 35 nips-2006-Approximate inference using planar graph decomposition

Author: Amir Globerson, Tommi S. Jaakkola

Abstract: A number of exact and approximate methods are available for inference calculations in graphical models. Many recent approximate methods for graphs with cycles are based on tractable algorithms for tree structured graphs. Here we base the approximation on a different tractable model, planar graphs with binary variables and pure interaction potentials (no external field). The partition function for such models can be calculated exactly using an algorithm introduced by Fisher and Kasteleyn in the 1960s. We show how such tractable planar models can be used in a decomposition to derive upper bounds on the partition function of non-planar models. The resulting algorithm also allows for the estimation of marginals. We compare our planar decomposition to the tree decomposition method of Wainwright et. al., showing that it results in a much tighter bound on the partition function, improved pairwise marginals, and comparable singleton marginals.
We compare our method with TRW on a planar graph with an external field, and show that it performs favorably with respect to both pairwise marginals and the bound on the partition function, and the two methods give similar results for singleton marginals. 1 Definitions and Notations Given a graph G with n vertices and a set of edges E, we are interested in pairwise Markov Random Fields (MRF) over the graph G. A pairwise MRF [13] is a multivariate distribution over variables x = {x1 , . . . , xn } defined as 1 P p(x) = e ij∈E fij (xi ,xj ) (1) Z where fij are a set of |E| functions, or interaction potentials, defined over pairs of variables. The P partition function is defined as Z = x e ij∈E fij (xi ,xj ) . Here we will focus on the case where xi ∈ {±1}. Furthermore, we will be interested in interaction potentials which only depend on agreement or disagreement between the signs of their variables. We define those by 1 θij (1 + xi xj ) = θij I(xi = xj ) (2) 2 so that fij (xi , xj ) is zero if xi = xj and θij if xi = xj . The model is then defined via the set of parameters θij . We use θ to denote the vector of parameters θij , and denote the partition function by Z(θ) to highlight its dependence on these parameters. f (xi , xj ) = A graph G is defined as planar if it can be drawn in the plane without any intersection of edges [4]. With some abuse of notation, we define E as the set of line segments in 2 corresponding to the edges in the graph. The regions of 2 \ E are defined as the faces of the graph. The face which corresponds to an unbounded region is called the external face. Given a planar graph G, its dual graph G∗ is defined in the following way: the vertices of G∗ correspond to faces of G, and there is an edge between two vertices in G∗ iff the two corresponding faces in G share an edge. If the graph G is weighted, the weight on an edge in G∗ is the weight on the edge shared by the corresponding faces in G. A plane triangulation of a planar graph G is obtained from G by adding edges such that all the faces of the resulting graph have exactly three vertices. Thus a plane triangulated graph has a dual where all vertices have degree three. It can be shown that every plane graph can be plane triangulated [4]. We shall also need the notion of a perfect matching on a graph. A perfect matching on a graph G is defined as a set of edges H ⊆ E such that every vertex in G has exactly one edge in H incident on it. If the graph is weighted, the weight of the matching is defined as the product of the weights of the edges in the matching. Finally, we recall the definition of a marginal polytope of a graph [12]. Consider an MRF over a graph G where fij are given by Equation 2. Denote the probability of the event I(xi = xj ) under p(x) by τij . The marginal polytope of G, denoted by M(G), is defined as the set of values τij that can be obtained under some assignment to the parameters θij . For a general graph G the polytope M(G) cannot be described using a polynomial number of inequalities. However, for planar graphs, it turns out that a set of O(n3 ) constraints, commonly referred to as triangle inequalities, suffice to describe M(G) (see [3] page 434). The triangle inequalities are defined by 1 TRI(n) = {τij : τij + τjk − τik ≤ 1, τij + τjk + τik ≥ 1, ∀i, j, k ∈ {1, . . . , n}} (3) Note that the above inequalities actually contain variables τij which do not correspond to edges in the original graph G. 
Thus the equality M(G) = TRI(n) should be understood as referring only to the values of τij that correspond to edges in the graph. Importantly, the values of τij for edges not in the graph need not be valid marginals for any MRF. In other words M(G) is a projection of TRI(n) on the set of edges of G. It is well known that the marginal polytope for trees is described via pairwise constraints. It is thus interesting that for planar graphs, it is triplets, rather than pairwise 1 The definition here is slightly different from that in [3], since here we refer to agreement probabilities, whereas [3] refers to disagreement probabilities. This polytope is also referred to as the cut polytope. constraints, that characterize the polytope. In this sense, planar graphs and trees may be viewed as a hierarchy of polytope complexity classes. It remains an interesting problem to characterize other structures in this hierarchy and their related inference algorithms. 2 Exact calculation of partition function using perfect matching The seminal works of Kasteleyn [7] and Fisher [5] have shown how one can calculate the partition function for a binary MRF over a planar graph with pure interaction potentials. We briefly review Fisher’s construction, which we will use in what follows. Our interpretation of the method differs somewhat from that of Fisher, but we believe it is more straightforward. The key idea in calculating the partition function is to convert the summation over values of x to the problem of calculating the sum of weights of all perfect matchings in a graph constructed from G, as shown below. In this section, we consider weighted graphs (graphs with numbers assigned to their edges). For the graph G associated with the pairwise MRF, we assign weights wij = e2θij to the edges. The first step in the construction is to plane triangulate the graph G. Let us call the resulting graph GT . We define an MRF on GT by assigning a parameter θij = 0 to the edges that have been added to G, and the corresponding weight wij = 1. Thus GT essentially describes the same distribution as G, and therefore has the same partition function. We can thus restrict our attention to calculating the partition function for the MRF on GT . As a first step in calculating a partition function over GT , we introduce the following definition: a ˆ set of edges E in GT is an agreement edge set (or AES) if for every triangle face F in GT one of the ˆ ˆ following holds: The edges in F are all in E, or exactly one of the edges in F is in E. The weight ˆ is defined as the product of the weights of the edges in E. ˆ of a set E It can be shown that there exists a bijection between pairs of assignments {x, −x} and agreement edge sets. The mapping from x to an edge set is simply the set of edges such that xi = xj . It is easy to see that this is an agreement edge set. The reverse mapping is obtained by finding an assignment x such that xi = xj iff the corresponding edge is in the agreement edge set. The existence of this mapping can be shown by induction on the number of (triangle) faces. P The contribution of a given assignment x to the partition function is e ˆ sponds to an AES denoted by E it is easy to see that P e ij∈E θij I(xi =xj ) = e− P ij∈E θij P e ˆ ij∈E 2θij = ce P ˆ ij∈E ij∈E 2θij θij I(xi =xj ) =c wij . If x corre(4) ˆ ij∈E P where c = e− ij∈E θij . Define the superset Λ as the set of agreement edge sets. The above then implies that Z(θ) = 2c E∈Λ ij∈E wij , and is thus proportional to the sum of AES weights. 
ˆ ˆ To sum over agreement edge sets, we use the following elegant trick introduced by Fisher [5]. Construct a new graph GPM from the dual of GT by introducing new vertices and edges according to the following rule: Replace each original vertex with three vertices that are connected to each other, and assign a weight of one to the new edges. Next, consider the three neighbors of the original vertex 2 . Connect each of the three new vertices to one of these three neighbors, keeping the original weights on these edges. The transformation is illustrated in Figure 1. The new graph GPM has O(3n) vertices, and is also planar. It can be seen that there is a one to one correspondence between perfect matchings in GPM and agreement edge sets in GT . Define Ω to be the set of perfect matchings in GPM . Then Z(θ) = 2c M ∈Ω ij∈M wij where we have used the fact that all the new weights have a value of one. Thus, the partition function is a sum over the weights of perfect matchings in GPM . Finally, we need a way of summing over the weights of the set of perfect matchings in a graph. Kasteleyn [7] proved that for a planar graph GPM , this sum may be obtained using the following sequence of steps: • Direct the edges of the graph GPM such that for every face (except possibly the external face), the number of edges on its perimeter oriented in a clockwise manner is odd. Kasteleyn showed that such a so called Pfaffian orientation may be constructed in polynomial time for a planar graph (see also [8] page 322). 2 Note that in the dual of GT all vertices have degree three, since GT is plane triangulated. 1.2 0.7 0.6 1 1 1 0.8 0.6 0.8 1.5 1.4 1.5 1 1 1.2 1 1 1 1 0.7 1.4 1 1 1 Figure 1: Illustration of the graph transformations in Section 2 for a complete graph with four vertices. Left panel shows the original weighted graph (dotted edges and grey vertices) and its dual (solid edges and black vertices). Right panel shows the dual graph with each vertex replaced by a triangle (the graph GPM in the text). Weights for dual graph edges correspond to the weights on the original graph. • Define the matrix P (GPM ) to be a skew symmetric matrix such that Pij = 0 if ij is not an edge, Pij = wij if the arrow on edge ij runs from i to j and Pij = −wij otherwise. • The sum over weighted matchings can then be shown to equal |P (GPM )|. The partition function is thus given by Z(θ) = 2c |P (GPM )|. To conclude this section we reiterate the following two key points: the partition function of a binary MRF over a planar graph with interaction potentials as in Equation 2 may be calculated in polynomial time by calculating the determinant of a matrix of size O(3n). An important outcome of this result is that the functional relation between Z(θ) and the parameters θij is known, a fact we shall use in what follows. 3 Partition function bounds via planar decomposition Given a non-planar graph G over binary variables with a vector of interaction potentials θ, we wish to use the exact planar computation to obtain a bound on the partition function of the MRF on G. We assume for simplicity that the potentials on the MRF for G are given in the form of Equation 2. Thus, G violates the assumptions of the previous section only in its non-planarity. Define G(r) as a set of spanning planar subgraphs of G, i.e., each graph G(r) is planar and contains all the vertices of G and some its edges. Denote by m the number of such graphs. 
Introduce the following definitions: (r) • θ (r) is a set of parameters on the edges of G(r) , and θij is an element in this set. Z(θ (r) ) is the partition function of the MRF on G(r) with parameters θ (r) . ˆ (r) ˆ(r) • θ is a set of parameters on the edges of G such that if edge (ij) is in G(r) then θij = (r) ˆ(r) θ , and otherwise θ = 0. ij ij Given a distribution ρ(r) on the graphs G(r) (i.e., ρ(r) ≥ 0 for r = 1, . . . , m and assume that the parameters for G(r) are such that ˆ ρ(r)θ θ= (r) r ρ(r) = 1), (5) r Then, by the convexity of the log partition function, as a function of the model parameters, we have ρ(r) log Z(θ (r) ) ≡ f (θ, ρ, θ (r) ) log Z(θ) ≤ (6) r Since by assumption the graphs G(r) are planar, this bound can be calculated in polynomial time. Since this bound is true for any set of parameters θ (r) which satisfies the condition in Equation 5 and for any distribution ρ(r), we may optimize over these two variables to obtain the tightest bound possible. Define the optimal bound for a fixed value of ρ(r) by g(ρ, θ) (optimization is w.r.t. θ (r) ) g(ρ, θ) = f (θ, ρ, θ (r) ) min θ (r) : P ˆ ρ(r)θ (r) =θ (7) Also, define the optimum of the above w.r.t. ρ by h(θ). h(θ) = min g(θ, ρ) ρ(r) ≥ 0, ρ(r) = 1 (8) Thus, h(θ) is the optimal upper bound for the given parameter vector θ. In the following section we argue that we can in fact find the global optimum of the above problem. 4 Globally Optimal Bound Optimization First consider calculating g(ρ, θ) from Equation 7. Note that since log Z(θ (r) ) is a convex function of θ (r) , and the constraints are linear, the overall optimization is convex and can be solved efficiently. In the current implementation, we use a projected gradient algorithm [2]. The gradient of f (θ, ρ, θ (r) ) w.r.t. θ (r) is given by ∂f (θ, ρ, θ (r) ) (r) ∂θij (r) = ρ(r) 1 + eθij (r) P −1 (GPM ) (r) k(i,j) Sign(Pk(i,j) (GPM )) (9) where k(i, j) returns the row and column indices of the element in the upper triangular matrix of (r) (r) P (GPM ), which contains the element e2θij . Since the optimization in Equation 7 is convex, it has an equivalent convex dual. Although we do not use this dual for optimization (because of the difficulty of expressing the entropy of planar models solely in terms of triplet marginals), it nevertheless allows some insight into the structure of the problem. The dual in this case is closely linked to the notion of the marginal polytope defined in Section 1. Using a derivation similar to [11], we arrive at the following characterization of the dual g(ρ, θ) = max τ ∈TRI(n) ρ(r)H(θ (r) (τ )) θ·τ + (10) r where θ (r) (τ ) denotes the parameters of an MRF on G(r) such that its marginals are given by the restriction of τ to the edges of G(r) , and H(θ (r) (τ )) denotes the entropy of the MRF over G(r) with parameters θ (r) (τ ). The maximized function in Equation 10 is linear in ρ and thus g(ρ, θ) is a pointwise maximum over (linear) convex functions in ρ and is thus convex in ρ. It therefore has no (r) local minima. Denote by θmin (ρ) the set of parameters that minimizes Equation 7 for a given value of ρ. Using a derivation similar to that in [11], the gradient of g(ρ, θ) can be shown to be ∂g(ρ, θ) (r) = H(θmin (ρ)) ∂ρ(r) (11) Since the partition function for G(r) can be calculated efficiently, so can the entropy. We can now summarize the algorithm for calculating h(θ) • Initialize ρ0 . Iterate: – For ρt , find θ (r) which solves the minimization in Equation 7. 
– Calculate the gradient of g(ρ, θ) at ρt using the expression in Equation 11 – Update ρt+1 = ρt + αv where v is a feasible search direction calculated from the gradient of g(ρ, θ) and the simplex constraints on ρ. The step size α is calculated via an Armijo line search. – Halt when the change in g(ρ, θ) is smaller than some threshold. Note that the minimization w.r.t. θ (r) is not very time consuming since we can initialize it with the minimum from the previous step, and thus only a few iterations are needed to find the new optimum, provided the change in ρ is not too big. The above algorithm is guaranteed to converge to a global optimum of ρ [2], and thus we obtain the tightest possible upper bound on Z(θ) given our planar graph decomposition. The procedure described here is asymmetric w.r.t. ρ and θ (r) . In a symmetric formulation the minimizing gradient steps could be carried out jointly or in an alternating sequence. The symmetric ˆ (r) formulation can be obtained by decoupling ρ and θ (r) in the bi-linear constraint ρ(r)θ = θ. Field Figure 2: Illustration of planar subgraph construction for a rectangular lattice with external field. Original graph is shown on the left. The field vertex is connected to all vertices (edges not shown). The graph on the right results from isolating the 4th ,5th columns of the original graph (shown in grey), and connecting the field vertex to the external vertices of the three disconnected components. Note that the resulting graph is planar. ˜ ˜ Specifically, we introduce θ (r) = θ (r) ρ(r) and perform the optimization w.r.t. ρ and θ (r) . It can be ˜(r) ) with the relevant (de-coupled) constraint is equivalent shown that a stationary point of f (θ, ρ, θ to the procedure described above. The advantage of this approach is that the exact minimization w.r.t θ (r) is not required before modifying ρ. Our experiments have shown, however, that the methods take comparable times to converge, although this may be a property of the implementation. 5 Estimating Marginals The optimization problem as defined above minimizes an upper bound on the partition function. However, it may also be of interest to obtain estimates of the marginals of the MRF over G. To obtain marginal estimates, we follow the approach in [11]. We first characterize the optimum of Equation 7 for a fixed value of ρ. Deriving the Lagrangian of Equation 7 w.r.t. θ (r) we obtain the (r) following characterization of θmin (ρ): Marginal Optimality Criterion: For any two graphs G(r) , G(s) such that the edge (ij) is in both (r) (s) graphs, the optimal parameter vector satisfies τij (θmin (ρ)) = τij (θmin (ρ)). Thus, the optimal set of parameters for the graphs G(r) is such that every two graphs agree on the marginals of all the edges they share. This implies that at the optimum, there is a well defined set of marginals over all the edges. We use this set as an approximation to the true marginals. A different method for estimating marginals uses the partition function bound directly. We first P calculate partition function bounds on the sums: αi (1) = x:xi =1 e ij∈E fij (xi ,xj ) and αi (−1) = P αi (1) e ij∈E fij (xi ,xj ) and then normalize αi (1)+αi (−1) to obtain an estimate for p(xi = 1). This method has the advantage of being more numerically stable (since it does not depend on derivatives of log Z). However, it needs to be calculated separately for each variable, so that it may be time consuming if one is interested in marginals for a large set of variables. 
x:xi =−1 6 Experimental Evaluation We study the application of our Planar Decomposition (PDC) P method to a binary MRF on a square P lattice with an external field. The MRF is given by p(x) ∝ e ij∈E θij xi xj + i∈V θi xi where V are the lattice vertices, and θi and θij are parameters. Note that this interaction does not satisfy the conditions for exact calculation of the partition function, even though the graph is planar. This problem is in fact NP hard [1]. However, it is possible to obtain the desired interaction form by introducing an additional variable xn+1 that is connected to all the original variables.P Denote the correspondP ij∈E θij xi xj + i∈V θi,n+1 xi xn+1 , where ing graph by Gf . Consider the distribution p(x, xn+1 ) ∝ e θi,n+1 = θi . It is easy to see that any property of p(x) (e.g., partition function, marginals) may be calculated from the corresponding property of p(x, xn+1 ). The advantage of the latter distribution is that it has the desired interaction form. We can thus apply PDC by choosing planar subgraphs of the non-planar graph Gf . 0.25 0.15 0.1 0.05 0.5 1 1.5 Interaction Strength 0.03 Singleton Marginal Error Z Bound Error Pairwise Marginals Error 0.08 PDC TRW 0.2 0.07 0.06 0.05 0.04 0.03 0.02 2 0.5 1 1.5 Interaction Strength 0.025 0.02 0.015 0.01 0.005 2 0.5 1 1.5 Interaction Strength 2 !3 x 10 0.025 0.02 0.015 0.5 1 Field Strength 1.5 2 Singleton Marginal Error Pairwise Marginals Error Z Bound Error 0.03 0.03 0.025 0.02 0.015 0.5 1 Field Strength 1.5 2 9 8 7 6 5 4 3 0.5 1 Field Strength 1.5 2 Figure 3: Comparison of the TRW and Planar Decomposition (PDC) algorithms on a 7×7 square lattice. TRW results shown in red squares, and PDC in blue circles. Left column shows the error in the log partition bound. Middle column is the mean error for pairwise marginals, and right column is the error for the singleton marginal of the variable at the lattice center. Results in upper row are for field parameters drawn from U[−0.05, 0.05] and various interaction parameters. Results in the lower row are for interaction parameters drawn from U [−0.5, 0.5] and various field parameters. Error bars are standard errors calculated from 40 random trials. There are clearly many ways to choose spanning planar subgraphs of Gf . Spanning subtrees are one option, and were used in [11]. Since our optimization is polynomial in the number of subgraphs, √ we preferred to use a number of subgraphs that is linear in n. The key idea in generating these planar subgraphs is to generate disconnected components of the lattice and connect xn+1 only to the external vertices of these components. Here we generate three disconnected components by isolating two neighboring columns (or rows) from the rest of the graph, resulting in three components. This is √ illustrated in Figure 2. To this set of 2 n graphs, we add the independent variables graph consisting only of edges from the field node to all the other nodes. We compared the performance of the PDC and TRW methods 3 4 on a 7 × 7 lattice . Since the exact partition function and marginals can be calculated for this case, we could compare both algorithms to the true values. The MRF parameters were set according to the two following scenarios: 1) Varying Interaction - The field parameters θi were drawn uniformly from U[−0.05, 0.05], and the interaction θij from U[−α, α] where α ∈ {0.2, 0.4, . . . , 2}. This is the setting tested in [11]. 2) Varying Field θi was drawn uniformly from U[−α, α], where α ∈ {0.2, 0.4, . . . , 2} and θij from U[−0.5, 0.5]. 
For each scenario, we calculated the following measures: 1) Normalized log partition error 1 1 alg − log Z true ). 2) Error in pairwise marginals |E| ij∈E |palg (xi = 1, xj = 1) − 49 (log Z ptrue (xi = 1, xj = 1)|. Pairwise marginals were calculated jointly using the marginal optimality criterion of Section 5. 3) Error in singleton marginals. We calculated the singleton marginals for the innermost node in the lattice (i.e., coordinate [3, 3]), which intuitively should be the most difficult for the planar based algorithm. This marginal was calculated using two partition functions, as explained in Section 5 5 . The same method was used for TRW. The reported error measure is |palg (xi = 1) − ptrue (xi = 1)|. Results were averaged over 40 random trials. Results for the two scenarios and different evaluation measures are given in Figure 3. It can be seen that the partition function bound for PDC is significantly better than TRW for almost all parameter settings, although the difference becomes smaller for large field values. Error for the PDC pairwise 3 TRW and PDC bounds were optimized over both the subgraph parameters and the mixture parameters ρ. In terms of running time, PDC optimization for a fixed value of ρ took about 30 seconds, which is still slower than the TRW message passing implementation. 5 Results using the marginal optimality criterion were worse for PDC, possibly due to its reduced numerical precision. 4 marginals are smaller than those of TRW for all parameter settings. For the singleton parameters, TRW slightly outperforms PDC. This is not surprising since the field is modeled by every spanning tree in the TRW decomposition, whereas in PDC not all the structures model a given field. 7 Discussion We have presented a method for using planar graphs as the basis for approximating non-planar graphs such as planar graphs with external fields. While the restriction to binary variables limits the applicability of our approach, it remains relevant in many important applications, such as coding theory and combinatorial optimization. Moreover, it is always possible to convert a non-binary graphical model to a binary one by introducing additional variables. The resulting graph will typically not be planar, even when the original graph over k−ary variables is. However, the planar decomposition method can then be applied to this non-planar graph. The optimization of the decomposition is carried out explicitly over the planar subgraphs, thus limiting the number of subgraphs that can be used in the approximation. In the TRW method this problem is circumvented since it is possible to implicitly optimize over all spanning trees. The reason this can be done for trees is that the entropy of an MRF over a tree may be written as a function of its marginal variables. We do not know of an equivalent result for planar graphs, and it remains a challenge to find one. It is however possible to combine the planar and tree decompositions into one single bound, which is guaranteed to outperform the tree or planar approximations alone. The planar decomposition idea may in principle be applied to bounding the value of the MAP assignment. However, as in TRW, it can be shown that the solution is not dependent on the decomposition (as long as each edge appears in some structure), and the problem is equivalent to maximizing a linear function over the marginal polytope (which can be done in polynomial time for planar graphs). However, such a decomposition may suggest new message passing algorithms, as in [10]. 
Acknowledgments

The authors acknowledge support from the Defense Advanced Research Projects Agency (Transfer Learning program). Amir Globerson is also supported by the Rothschild Yad-Hanadiv fellowship. The authors also wish to thank Martin Wainwright for providing his TRW code.

References

[1] F. Barahona. On the computational complexity of Ising spin glass models. J. Phys. A, 15(10):3241–3253, 1982.
[2] D. P. Bertsekas, editor. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[3] M. M. Deza and M. Laurent. Geometry of Cuts and Metrics. Springer-Verlag, 1997.
[4] R. Diestel. Graph Theory. Springer-Verlag, 1997.
[5] M. E. Fisher. On the dimer solution of planar Ising models. J. Math. Phys., 7:1776–1781, 1966.
[6] M. I. Jordan, editor. Learning in Graphical Models. MIT Press, Cambridge, MA, 1998.
[7] P. W. Kasteleyn. Dimer statistics and phase transitions. Journal of Math. Physics, 4:287–293, 1963.
[8] L. Lovasz and M. D. Plummer. Matching Theory, volume 29 of Annals of Discrete Mathematics. North-Holland, New York, 1986.
[9] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Tree-based reparameterization framework for analysis of sum-product and related algorithms. IEEE Trans. on Information Theory, 49(5):1120–1146, 2003.
[10] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Trans. on Information Theory, 51(11):1120–1146, 2005.
[11] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. on Information Theory, 51(7):2313–2335, 2005.
[12] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical report, UC Berkeley Dept. of Statistics, 2003.
[13] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. on Information Theory, 51(7):2282–2312, 2005.

5 0.49356553 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

Author: Yuanhao Chen, Long Zhu, Alan L. Yuille

Abstract: We describe an unsupervised method for learning a probabilistic grammar of an object from a set of training examples. Our approach is invariant to the scale and rotation of the objects. We illustrate our approach using thirteen objects from the Caltech 101 database. In addition, we learn the model of a hybrid object class where we do not know the specific object or its position, scale or pose. This is illustrated by learning a hybrid class consisting of faces, motorbikes, and airplanes. The individual objects can be recovered as different aspects of the grammar for the object class. In all cases, we validate our results by learning the probability grammars from training datasets and evaluating them on the test datasets. We compare our method to alternative approaches. The advantages of our approach are the speed of inference (under one second), the parsing of the object, and increased accuracy of performance. Moreover, our approach is very general and can be applied to a large range of objects and structures.

6 0.48974797 74 nips-2006-Efficient Structure Learning of Markov Networks using $L_1$-Regularization

7 0.45930937 69 nips-2006-Distributed Inference in Dynamical Systems

8 0.44376972 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall

9 0.42117807 98 nips-2006-Inferring Network Structure from Co-Occurrences

10 0.41555321 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

11 0.41220769 43 nips-2006-Bayesian Model Scoring in Markov Random Fields

12 0.39598694 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network

13 0.34024557 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

14 0.33867177 182 nips-2006-Statistical Modeling of Images with Fields of Gaussian Scale Mixtures

15 0.33852798 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons

16 0.31284639 144 nips-2006-Near-Uniform Sampling of Combinatorial Spaces Using XOR Constraints

17 0.31261057 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation

18 0.30893004 139 nips-2006-Multi-dynamic Bayesian Networks

19 0.30456945 23 nips-2006-Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models

20 0.28628501 90 nips-2006-Hidden Markov Dirichlet Process: Modeling Genetic Recombination in Open Ancestral Space


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.05), (3, 0.017), (7, 0.366), (9, 0.031), (22, 0.052), (44, 0.123), (57, 0.06), (65, 0.034), (66, 0.017), (69, 0.042), (71, 0.053), (90, 0.014), (93, 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.97307479 151 nips-2006-On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts

Author: Hariharan Narayanan, Mikhail Belkin, Partha Niyogi

Abstract: One of the intuitions underlying many graph-based methods for clustering and semi-supervised learning is that class or cluster boundaries pass through areas of low probability density. In this paper we provide some formal analysis of that notion for a probability distribution. We introduce a notion of weighted boundary volume, which measures the length of the class/cluster boundary weighted by the density of the underlying probability distribution. We show that sizes of the cuts of certain commonly used data adjacency graphs converge to this continuous weighted volume of the boundary. keywords: Clustering, Semi-Supervised Learning
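As a rough illustration of the graph cuts this abstract refers to, the sketch below builds a Gaussian-weighted adjacency matrix on a two-cluster sample and evaluates the weight of the cut between the clusters. It is our own toy example, not the authors' code, and the kernel width is an arbitrary choice.

```python
# Illustrative sketch only: Gaussian-weighted adjacency graph and the value of
# the cut separating two index sets; a cut across a low-density gap is small.
import numpy as np

def gaussian_adjacency(X, sigma=0.3):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def cut_value(W, in_A):
    """Sum of edge weights crossing between set A and its complement."""
    in_A = np.asarray(in_A, dtype=bool)
    return W[np.ix_(in_A, ~in_A)].sum()

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(1.5, 0.2, (50, 2))])
labels = np.arange(100) < 50
print(cut_value(gaussian_adjacency(X), labels))
```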

same-paper 2 0.95557052 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields

Author: Thomas Ott, Ruedi Stoop

Abstract: We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation starting from message initialisations that prevent from running into convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks.

3 0.94409877 129 nips-2006-Map-Reduce for Machine Learning on Multicore

Author: Cheng-tao Chu, Sang K. Kim, Yi-an Lin, Yuanyuan Yu, Gary Bradski, Kunle Olukotun, Andrew Y. Ng

Abstract: We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
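A minimal sketch of the "summation form" idea described in this abstract, using ordinary least squares as the statistic: each worker computes partial sums over its data chunk (map) and the partial sums are added together (reduce). The chunking scheme and names are our own, not the paper's implementation.

```python
# Sketch of the summation-form idea: per-chunk partial sums (map) are added
# (reduce) to obtain the sufficient statistics for linear regression.
import numpy as np
from multiprocessing import Pool

def map_chunk(chunk):
    X, y = chunk
    return X.T @ X, X.T @ y              # partial sums over this chunk

def fit_linear_regression(X, y, n_workers=4):
    chunks = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
    with Pool(n_workers) as pool:
        parts = pool.map(map_chunk, chunks)
    A = sum(p[0] for p in parts)          # reduce: add up the partial sums
    b = sum(p[1] for p in parts)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    w_true = np.arange(1, 6, dtype=float)
    y = X @ w_true + 0.01 * rng.normal(size=10_000)
    print(fit_linear_regression(X, y))    # close to [1, 2, 3, 4, 5]
```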

4 0.94224513 199 nips-2006-Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

Author: Yuanhao Chen, Long Zhu, Alan L. Yuille

Abstract: We describe an unsupervised method for learning a probabilistic grammar of an object from a set of training examples. Our approach is invariant to the scale and rotation of the objects. We illustrate our approach using thirteen objects from the Caltech 101 database. In addition, we learn the model of a hybrid object class where we do not know the specific object or its position, scale or pose. This is illustrated by learning a hybrid class consisting of faces, motorbikes, and airplanes. The individual objects can be recovered as different aspects of the grammar for the object class. In all cases, we validate our results by learning the probability grammars from training datasets and evaluating them on the test datasets. We compare our method to alternative approaches. The advantages of our approach are the speed of inference (under one second), the parsing of the object, and increased accuracy of performance. Moreover, our approach is very general and can be applied to a large range of objects and structures.

5 0.90719509 28 nips-2006-An Efficient Method for Gradient-Based Adaptation of Hyperparameters in SVM Models

Author: S. S. Keerthi, Vikas Sindhwani, Olivier Chapelle

Abstract: We consider the task of tuning hyperparameters in SVM models based on minimizing a smooth performance validation function, e.g., smoothed k-fold cross-validation error, using non-linear optimization techniques. The key computation in this approach is that of the gradient of the validation function with respect to hyperparameters. We show that for large-scale problems involving a wide choice of kernel-based models and validation functions, this computation can be very efficiently done; often within just a fraction of the training time. Empirical results show that a near-optimal set of hyperparameters can be identified by our approach with very few training rounds and gradient computations.
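A hedged sketch of the kind of search this abstract describes: gradient steps on the log-hyperparameters of an RBF-kernel ridge model against held-out squared error. Finite-difference gradients stand in for the analytic gradients the paper derives, and all names and constants below are our own assumptions.

```python
# Illustrative only: tune log(C) and log(gamma) of kernel ridge regression by
# gradient descent on held-out squared error, with finite-difference gradients
# standing in for the analytic gradient computation developed in the paper.
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def val_loss(log_params, Xtr, ytr, Xva, yva):
    C, gamma = np.exp(log_params)
    K = rbf_kernel(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + np.eye(len(Xtr)) / C, ytr)   # kernel ridge fit
    pred = rbf_kernel(Xva, Xtr, gamma) @ alpha
    return float(np.mean((pred - yva) ** 2))

def tune(Xtr, ytr, Xva, yva, steps=30, lr=0.3, eps=1e-4):
    p = np.zeros(2)                                           # log C, log gamma
    for _ in range(steps):
        g = np.zeros(2)
        for i in range(2):                                    # finite differences
            e = np.zeros(2); e[i] = eps
            g[i] = (val_loss(p + e, Xtr, ytr, Xva, yva) -
                    val_loss(p - e, Xtr, ytr, Xva, yva)) / (2 * eps)
        p -= lr * g
    return np.exp(p)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (120, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=120)
print(tune(X[:80], y[:80], X[80:], y[80:]))                   # tuned (C, gamma)
```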

6 0.89695901 60 nips-2006-Convergence of Laplacian Eigenmaps

7 0.82775372 80 nips-2006-Fundamental Limitations of Spectral Clustering

8 0.76006198 43 nips-2006-Bayesian Model Scoring in Markov Random Fields

9 0.75007474 128 nips-2006-Manifold Denoising

10 0.74931371 171 nips-2006-Sample Complexity of Policy Search with Known Dynamics

11 0.74605018 119 nips-2006-Learning to Rank with Nonsmooth Cost Functions

12 0.74479204 33 nips-2006-Analysis of Representations for Domain Adaptation

13 0.74416125 181 nips-2006-Stability of $K$-Means Clustering

14 0.73150915 121 nips-2006-Learning to be Bayesian without Supervision

15 0.72804725 109 nips-2006-Learnability and the doubling dimension

16 0.72317684 37 nips-2006-Attribute-efficient learning of decision lists and linear threshold functions under unconcentrated distributions

17 0.72263485 31 nips-2006-Analysis of Contour Motions

18 0.72182548 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

19 0.71475744 56 nips-2006-Conditional Random Sampling: A Sketch-based Sampling Technique for Sparse Data

20 0.71440035 174 nips-2006-Similarity by Composition