nips nips2000 nips2000-47 knowledge-graph by maker-knowledge-mining

47 nips-2000-Error-correcting Codes on a Bethe-like Lattice


Source: pdf

Author: Renato Vicente, David Saad, Yoshiyuki Kabashima

Abstract: We analyze Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as an extremization of a proper free-energy. We find a thermodynamic phase transition that coincides with information theoretical upper-bounds and explain the practical code performance in terms of the free-energy landscape.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We analyze Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. [sent-6, score-0.28]

2 The method naturally recovers the probability propagation decoding algorithm as an extremization of a proper free-energy. [sent-7, score-0.481]

3 We find a thermodynamic phase transition that coincides with information theoretical upper-bounds and explain the practical code performance in terms of the free-energy landscape. [sent-8, score-0.405]

4 Error-correcting codes (ECC) are particularly interesting examples of inference problems in loopy intractable graphs [4]. [sent-12, score-0.244]

5 Recently the focus has been directed to the state-of-the-art, high-performance turbo codes [5] and to Gallager and MN codes [6,7]. [sent-13, score-0.543]

6 Statistical physics has been applied to the analysis of ECCs as an alternative to information theory methods yielding some new interesting directions and suggesting new high-performance codes [8]. [sent-14, score-0.333]

7 Sourlas was the first to relate error-correcting codes to spin glass models [9], showing that the Random-energy Model [10] can be thought of as an ideal code capable of saturating Shannon's bound at vanishing code rates. [sent-15, score-0.811]

8 This work was extended recently to the case of finite code rates [11] and has been further developed for analyzing MN codes of various structures [12]. [sent-16, score-0.403]

9 All of the analyses mentioned above, as well as the recent turbo codes analysis [13], relied on the replica approach under the assumption of replica symmetry. [sent-17, score-0.557]

10 To date, the only model that can be analyzed exactly is the REM, which corresponds to an impractical coding scheme with vanishing code rate. [sent-18, score-0.277]

11 Here we present a statistical physics treatment of non-structured Gallager codes by employing a mean-field approximation based on the use of a generalized tree structure (Bethe lattice [14]) known as a Husimi cactus, which is exactly solvable. [sent-19, score-0.666]

12 In this framework the probability propagation decoding algorithm (PP) emerges naturally, providing an alternative view of the relationship between PP decoding and mean-field approximations already observed in [15]. [sent-21, score-0.715]

13 Moreover, this approach has the advantage of being slightly more controlled and easier to understand than replica calculations. [sent-22, score-0.129]

14 This paper is organized as follows: in the next section we present unstructured Gallager codes and the statistical physics framework used to analyze them; in section 3 we make use of the lattice geometry to solve the model exactly. [sent-23, score-0.568]

15 In section 4 we analyze the typical code performance. [sent-24, score-0.197]

16 2 Gallager codes: statistical physics formulation. We will concentrate here on a simple communication model whereby messages are represented by binary vectors and are communicated through a Binary Symmetric Channel (BSC) where uncorrelated bit flips appear with probability f. [sent-26, score-0.303]

17 A Gallager code is defined by a binary matrix A = [C1 | C2], concatenating two very sparse matrices known to both sender and receiver, with C2 (of dimensionality (M−N)×(M−N)) being invertible; the matrix C1 is of dimensionality (M−N)×N. [sent-27, score-0.159]

18 Encoding refers to the production of an M-dimensional binary code word t ∈ {0,1}^M (M > N) from the original message ξ ∈ {0,1}^N by t = G^T ξ (mod 2), where all operations are performed in the field {0,1} and are indicated by (mod 2). [sent-28, score-0.312]

19 The generator matrix is G = [I | C2^{-1} C1] (mod 2), where I is the N×N identity matrix, implying that AG^T (mod 2) = 0 and that the first N bits of t are set to the message ξ. In regular Gallager codes the number of non-zero elements in each row of A is chosen to be exactly K. [sent-29, score-0.35]

20 The number of elements per column is then C = (1−R)K, where the code rate is R = N/M (for unbiased messages). [sent-30, score-0.159]

21 The encoded vector t is then corrupted by noise represented by the vector ζ ∈ {0,1}^M with components independently drawn from P(ζ) = (1−f)δ(ζ) + f δ(ζ−1). [sent-31, score-0.077]

22 Decoding is carried out by multiplying the received message by the matrix A to produce the syndrome vector z = Ar = Aζ (mod 2), from which an estimate τ̂ for the noise vector can be produced. [sent-34, score-0.215]

23 An estimate for the original message is then obtained as the first N bits of r + τ̂ (mod 2). [sent-35, score-0.059]
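
To make the whole pipeline above concrete, here is a minimal encoding-and-channel sketch in Python. It is illustrative only: all sizes are toy values of our choosing, and C2 is taken to be the identity, which is trivially invertible and sparse (a real construction draws a random sparse invertible C2), so column weights are only approximately regular.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unstructured Gallager code (sizes are illustrative assumptions)
N = 12                        # message bits
M = 4 * N                     # codeword bits, rate R = N/M = 1/4
K = 4                         # nonzeros per row of A

C1 = np.zeros((M - N, N), dtype=np.uint8)
for row in C1:                # K-1 nonzeros per row of C1 (+1 from C2)
    row[rng.choice(N, size=K - 1, replace=False)] = 1
C2 = np.eye(M - N, dtype=np.uint8)   # simplifying assumption: C2 = I
A = np.hstack([C1, C2])              # parity-check matrix A = [C1 | C2]

GT = np.vstack([np.eye(N, dtype=np.uint8), C1])   # G^T, since C2^{-1} = I

xi = rng.integers(0, 2, size=N, dtype=np.uint8)   # message xi
t = GT @ xi % 2                    # codeword; first N bits equal xi
assert np.all(A @ t % 2 == 0)      # A G^T = 0 (mod 2)

f = 0.1                                        # BSC flip probability
zeta = (rng.random(M) < f).astype(np.uint8)    # noise, P(zeta_j = 1) = f
r = (t + zeta) % 2                             # received word
z = A @ r % 2                                  # syndrome z = A r = A zeta (mod 2)
assert np.all(z == A @ zeta % 2)
```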

24 The Bayes optimal estimator (also known as marginal posterior maximizer, MPM) for the noise is defined as τ̂_j = argmax_{τ_j} P(τ_j | z). [sent-36, score-0.077]

25 The performance of this estimator can be measured by the probability of bit error P_b = 1 − (1/M) Σ_{j=1}^M δ[τ̂_j; ζ_j], where δ[;] is the Kronecker delta. [sent-38, score-0.128]

26 Knowing the matrices C2 and C1, the syndrome vector z and the noise level f, it is possible to apply Bayes' theorem and compute the posterior probability P(τ | z) = (1/Z) χ[z = Aτ (mod 2)] P(τ), (1) where χ[X] is an indicator function giving 1 if X is true and 0 otherwise. [sent-39, score-0.195]

27 To compute the MPM one has to compute the marginal posterior P(τ_j | z) = Σ_{τ_i: i≠j} P(τ | z), which in general requires O(2^M) operations, thus becoming impractical for long messages. [sent-40, score-0.034]
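
Before turning to the O(M) algorithm, a brute-force sketch of the MPM makes the definition concrete. The helper name and interface are our own; it enumerates all 2^M noise candidates weighted as in eq. (1), so it is usable only for very small toy codes (say M ≤ 20).

```python
import numpy as np
from itertools import product

def mpm_estimate(A, z, f):
    """Brute-force marginal posterior maximizer (MPM), O(2^M).

    Sums the prior P(tau) over all noise vectors consistent with the
    syndrome (eq. (1)) and thresholds P(tau_j = 1 | z) at 1/2."""
    M = A.shape[1]
    marg = np.zeros(M)
    norm = 0.0
    for bits in product((0, 1), repeat=M):
        tau = np.array(bits, dtype=np.uint8)
        if np.all(A @ tau % 2 == z):          # indicator chi[z = A tau]
            w = f ** tau.sum() * (1 - f) ** (M - tau.sum())  # prior P(tau)
            marg += w * tau
            norm += w
    return (marg / norm > 0.5).astype(np.uint8)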

28 To solve this problem one can use the sparseness of A to design algorithms that require O(M) operations to perform the same task. [sent-41, score-0.041]

29 One of these methods is the probability propagation algorithm (PP), also known as belief propagation or sum-product algorithm [16]. [sent-42, score-0.251]

30 The connection to statistical physics becomes clear when the field {0,1} is replaced by Ising spins {±1} and mod 2 sums by products [9]. [sent-43, score-0.408]

31 The syndrome vector acquires the form of a multi-spin coupling J_μ = Π_{j∈L(μ)} ζ_j. [sent-44, score-0.13]

32 Figure 1: Husimi cactus with K = 3 and connectivity C = 4. [sent-51, score-0.174]

33 The sets L(μ) of a matrix A, which is not necessarily a concatenation of two separate matrices (therefore defining an unstructured Gallager code), are given by L(μ) = {j : A_μj = 1}. [sent-53, score-0.04]

34 The external field corresponds to the prior probability over the noise and has the form F = atanh(1−2f). [sent-62, score-0.212]

35 The resulting Hamiltonian is a multi-spin ferromagnet with finite connectivity in a random field h_j = F ζ_j. [sent-66, score-0.1]

36 The decoding process corresponds to finding local magnetizations at temperature β = 1, m_j = ⟨τ_j⟩_{β=1}, and calculating estimates as τ̂_j = sgn(m_j). [sent-67, score-0.524]

37 In the {±1} representation the probability of bit error acquires the form P_b = 1/2 − (1/2M) Σ_{j=1}^M ζ_j sgn(m_j), (3) connecting the code performance with the computation of local magnetizations. [sent-68, score-0.387]
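
In code, eq. (3) is a one-liner; the helper name is our own, and ζ and m here are the ±1 noise spins and the local magnetizations.

```python
import numpy as np

def bit_error_probability(zeta_spins, m):
    # eq. (3): P_b = 1/2 - (1/2M) * sum_j zeta_j * sgn(m_j)
    return 0.5 - 0.5 * np.mean(zeta_spins * np.sign(m))
```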

38 3.1 Generalized Bethe lattice: the Husimi cactus. A Husimi cactus with connectivity C is generated starting with a polygon of K vertices with one Ising spin in each vertex (generation 0). [sent-70, score-0.669]

39 All spins in a polygon interact through a single coupling J_μ. [sent-71, score-0.266]

40 In figure 1 we show the first step in the construction of a Husimi cactus; in a generic step, the base spins of the (n−1)-generation polygons, numbering (C−1)(K−1), are attached to K−1 vertices of a generation-n polygon. [sent-73, score-0.514]

41 This process is iterated until a maximum generation n_max is reached; the graph is then completed by attaching C uncorrelated branches of n_max generations at their base spins. [sent-74, score-0.639]

42 In that way each spin inside the graph is connected to exactly C polygons. [sent-75, score-0.228]

43 The local magnetization at the centre, m_j, can be obtained by fixing boundary (initial) conditions in the 0-th generation and iterating recursion equations until generation n_max is reached. [sent-76, score-0.889]

44 Carrying out the calculation in the thermodynamic limit corresponds to having n_max ~ ln M generations and M → ∞. [sent-77, score-0.39]

45 Due to the tree-like structure, local quantities far from the boundary can be calculated recursively by specifying boundary conditions. [sent-81, score-0.189]

46 The typical decoding performance can therefore be computed exactly without resorting to replica calculations [17]. [sent-82, score-0.588]

47 P_{μk}(τ_k) ∝ Tr_{{τ_j}} exp[β(J_μ τ_k Π_{j∈L(μ)\k} τ_j − 1) + F τ_k] Π_{ν∈M(j)\μ, j∈L(μ)\k} P_{νj}(τ_j), (4) where the trace is over the spins τ_j such that j ∈ L(μ) \ k. [sent-85, score-0.088]

48 The effective field x_νj on a base spin j due to neighbors in polygon ν can be written as exp(−2x_νj) = e^{2F} P_νj(−)/P_νj(+). (5) [sent-86, score-0.505]

49 Combining (4) and (5) one finds the recursion relation exp(−2x_μk) = Tr_{{τ}} exp[−β J_μ Π_{j∈L(μ)\k} τ_j + Σ_{j∈L(μ)\k}(F + Σ_{ν∈M(j)\μ} x_νj) τ_j] / Tr_{{τ}} exp[+β J_μ Π_{j∈L(μ)\k} τ_j + Σ_{j∈L(μ)\k}(F + Σ_{ν∈M(j)\μ} x_νj) τ_j]. (6) [sent-87, sent-88]

51 By computing the traces and taking β → ∞ one obtains x_μk = atanh[J_μ Π_{j∈L(μ)\k} tanh(F + Σ_{ν∈M(j)\μ} x_νj)]. (7) [sent-89, score-0.102]

52 The effective local magnetization due to interactions with the nearest neighbors in one branch is given by m_μj = tanh(x_μj). [sent-90, score-0.252]

53 The effective local field on a base spin j of a polygon μ due to the C−1 branches in the previous generation and due to the external field is x̂_μj = F + Σ_{ν∈M(j)\μ} x_νj; the effective local magnetization is, therefore, m̂_μj = tanh(x̂_μj). [sent-91, score-0.981]

54 Equation (7) can then be rewritten in terms of m_μj and m̂_μj, and the PP equations [7,15,16] can be recovered: m̂_μk = tanh(F + Σ_{ν∈M(k)\μ} atanh(m_νk)), m_μk = J_μ Π_{j∈L(μ)\k} m̂_μj. (8) [sent-92, score-0.173]

55 Once the magnetizations on the boundary (0-th generation) are assigned, the local magnetization m_j at the central site is determined by iterating (8) and computing m_j = tanh(F + Σ_{ν∈M(j)} atanh(m_νj)). (9) [sent-93, score-0.84]
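
A direct, dense transcription of eqs. (8)-(9) might look as follows. This is a sketch under our own naming conventions: a practical decoder exploits the sparsity of A to reach the O(M) cost mentioned earlier, and the spin/binary bookkeeping follows the mapping above (J_μ = 1 − 2z_μ, and τ_j = +1 means "no flip").

```python
import numpy as np

def pp_decode(A, z, f, n_iter=50):
    """Dense sketch of PP decoding via eqs. (8)-(9).

    A: (M-N) x M binary parity-check matrix; z: syndrome in {0,1};
    f: BSC flip rate.  Returns the estimated noise spins sgn(m_j)."""
    n_chk, n_bit = A.shape
    F = np.arctanh(1.0 - 2.0 * f)                 # prior field F = atanh(1-2f)
    J = 1.0 - 2.0 * z                             # couplings J_mu = (-1)^{z_mu}
    L = [np.flatnonzero(A[mu]) for mu in range(n_chk)]     # L(mu)
    Mj = [np.flatnonzero(A[:, j]) for j in range(n_bit)]   # M(j)
    m_hat = np.where(A == 1, 1.0 - 2.0 * f, 0.0)  # init m_hat_{mu j} = 1 - 2f
    m = np.zeros_like(m_hat)
    for _ in range(n_iter):
        for mu in range(n_chk):        # m_{mu k} = J_mu * prod m_hat_{mu j}
            for k in L[mu]:
                rest = [j for j in L[mu] if j != k]
                m[mu, k] = J[mu] * np.prod(m_hat[mu, rest])
        for j in range(n_bit):         # m_hat_{mu j} = tanh(F + sum atanh m_{nu j})
            for mu in Mj[j]:
                rest = [nu for nu in Mj[j] if nu != mu]
                m_hat[mu, j] = np.tanh(F + np.sum(np.arctanh(m[rest, j])))
    m_post = np.array([np.tanh(F + np.sum(np.arctanh(m[Mj[j], j])))
                       for j in range(n_bit)])    # eq. (9)
    return np.sign(m_post)
```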

56 3.3 Probability propagation as extremization of a free-energy. The equations (8) describing PP decoding represent extrema of the free-energy (10). [sent-94, score-0.493]

57 Figure 2: (a) Mean normalized overlap between the actual noise vector ζ and decoded noise τ̂ for K = 4 and C = 3 (therefore R = 1/4). [sent-113, score-0.154]

58 Theoretical values (□), experimental averages over 20 runs for code word lengths M = 5000 (●) and M = 100 (full line). [sent-114, score-0.159]

59 (b) Shannon's bound (dashed line), information theory upper bound (full line) and thermodynamic transition obtained numerically (○). [sent-116, score-0.333]

60 Theoretical (◇) and experimental (+, M = 5000, averaged over 20 runs) PP decoding transitions are also shown. [sent-117, score-0.285]

61 The iteration of the maps (8) is actually one out of many different methods of finding extrema of this free-energy (not necessarily stable). [sent-123, score-0.051]

62 This observation opens an alternative way for analyzing the performance of a decoding algorithm by studying the landscape (10). [sent-124, score-0.329]

63 4.1 Macroscopic description. The typical macroscopic states of the system during decoding can be described by histograms of the variables m_μk averaged over all possible realizations of the noise vector ζ. [sent-126, score-0.367] [sent-128, score-0.077]

65 The local field distribution at the central site is computed by replacing C−1 by C in (11), taking into account the C polygons in the generation just before the central site, and inserting the distribution P_∞(x). [sent-131, score-0.397]

66 Equations (11) are identical to those obtained by the replica symmetric theory as in [12]. [sent-132, score-0.129]

67 By setting initial (boundary) conditions P_0(x) and numerically iterating (11), for C ≥ 3 one can find, up to some noise level f_s, a single stable fixed point at infinite fields, corresponding to a totally aligned state (successful decoding). [sent-133, score-0.331]
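
Equation (11) itself is not reproduced above, but its fixed points can be probed numerically with a population-dynamics sketch of the field recursion (7) averaged over the noise. The function name, parameters, and the gauge step are our assumptions: the standard gauge τ_j → ζ_j τ_j replaces J_μ by 1 and the prior field F by F ζ_j, and we clip fields to keep atanh finite.

```python
import numpy as np

def field_population(f, K=4, C=3, pop=100_000, n_gen=200, seed=1):
    """Population-dynamics sketch of the gauged recursion (7):
    a Monte Carlo stand-in for iterating the histograms in (11)."""
    rng = np.random.default_rng(seed)
    F = np.arctanh(1 - 2 * f)
    x = np.zeros(pop)                         # boundary condition P_0(x)
    for _ in range(n_gen):
        idx = rng.integers(0, pop, size=(pop, K - 1, C - 1))
        zeta = 1 - 2 * (rng.random((pop, K - 1)) < f)   # P(zeta = -1) = f
        prod = np.tanh(F * zeta + x[idx].sum(axis=2)).prod(axis=1)
        x = np.arctanh(np.clip(prod, -1 + 1e-12, 1 - 1e-12))
    return x                                  # samples from P_infinity(x)

# Below f_s the population runs away to strongly positive fields (aligned
# state); above f_s the fields stay finite (misaligned fixed point).
for f in (0.05, 0.20):                        # illustrative noise levels
    print(f, np.mean(np.tanh(field_population(f))))
```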

68 At f_s a bifurcation occurs and two other fixed points appear, one stable and one unstable, the former corresponding to a misaligned state (decoding failure). [sent-134, score-0.18]

69 In terms of the free-energy (10), below f_s the landscape is dominated by the aligned state, which is the global minimum. [sent-136, score-0.248]

70 Above f_s a sub-optimal state, corresponding to an exponentially large number of spurious local minima of the Hamiltonian (2), appears, and convergence to the totally aligned state becomes a difficult task. [sent-137, score-0.319]

71 At some critical noise level the totally aligned state loses its status as global minimum and the thermodynamic transition occurs. [sent-138, score-0.414]

72 The practical PP decoding is performed by setting initial conditions m̂_μj = 1 − 2f, corresponding to the prior probabilities, and iterating (8) until stationarity or a maximum number of iterations is attained. [sent-139, score-0.367]

73 The estimate for the noise vector is then produced by computing τ̂_j = sgn(m_j). [sent-140, score-0.077]
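
Tying the earlier toy sketches together (variable names continue from the encoding block in section 2 and the pp_decode sketch above), the full practical pipeline reads:

```python
tau = pp_decode(A, z, f)                 # noise spins: +1 means "no flip"
zeta_hat = (tau < 0).astype(np.uint8)    # back to the {0,1} picture
xi_hat = (r + zeta_hat)[:N] % 2          # first N bits of r + noise estimate
print("noise bit errors:", int(np.sum(zeta_hat != zeta)))
print("message recovered:", bool(np.all(xi_hat == xi)))
```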

74 At each decoding step the system can be described by histograms of the variables in (8); this is equivalent to iterating (11) (a similar idea was presented in [7]). [sent-141, score-0.367]

75 Below f_s the process always converges to the successful decoding state; above f_s it converges to successful decoding only if the initial conditions are finely tuned, and in general it converges to the failure state. [sent-142, score-0.914]

76 In Fig. 2a we show the theoretical mean overlap between the actual noise ζ and the estimate τ̂ as a function of the noise level f, as well as results obtained with PP decoding. [sent-144, score-0.154]

77 Information theory provides an upper bound for the maximum attainable code rate by equating the maximal information contents of the syndrome vector z and of the noise estimate τ̂ [7]. [sent-145, score-0.395]

78 The thermodynamic phase transition, obtained by finding the stable fixed points of (11) and their free-energies, interestingly coincides with this upper bound within the precision of the numerical calculation. [sent-146, score-0.367]

79 Note that the performance predicted by thermodynamics is not practical, as it requires O(2^M) operations for an exhaustive search for the global minimum of the free-energy. [sent-147, score-0.041]

80 In Fig. 2b we show the thermodynamic transition for K = 6 and compare it with the upper bound, Shannon's bound and the theoretical f_s values. [sent-149, score-0.425]

81 4.2 Tree-like approximation and the thermodynamic limit. The geometrical structure of a Gallager code defined by the matrix A can be represented by a bipartite graph (Tanner graph) [16] with bit and check nodes. [sent-151, score-0.529]

82 Each column j of A represents a bit node and each row μ represents a check node; A_μj = 1 means that there is an edge linking bit j to check μ. [sent-152, score-0.131] [sent-153, score-0.207]

84 It is possible to show that for a random ensemble of regular codes, the probability of completing a cycle after walking l edges starting from an arbitrary node is upper bounded by P[l; K, C, M] ≤ l² K^l / M. [sent-155, score-0.114]
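
A quick numeric look at how the bound behaves (walk lengths and sizes are arbitrary illustrative choices):

```python
# P[l; K, C, M] <= l^2 * K^l / M : informative only once M is large,
# and it vanishes for any fixed l as M grows.
K = 4
for M in (10**3, 10**5, 10**7):
    print(M, [round(l**2 * K**l / M, 6) for l in (2, 4, 6)])
```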

85 In the thermodynamic limit M → ∞ the probability P[l; K, C, M] → 0 for any finite l, and the bulk of the system is effectively tree-like. [sent-157, score-0.204]

86 By mapping each check node to a polygon with K bit nodes as vertices, one can map a Tanner graph onto a Husimi lattice that is effectively a tree for any number of generations of order less than ln M. [sent-158, score-0.672]

87 It is experimentally observed that the number of iterations of (8) required for convergence does not scale with the system size; therefore, it is expected that the interior of a tree-like lattice approximates a Gallager code with increasing accuracy as the system size increases. [sent-159, score-0.318]

88 5 Conclusions. To summarize, we solved exactly, without resorting to the replica method, a system representing a Gallager code on a Husimi cactus. [sent-162, score-0.328]

89 The results obtained are in agreement with the replica symmetric calculation and with numerical experiments carried out in systems of moderate size. [sent-163, score-0.164]

90 New insights into the decoding process are obtained by looking at a proper free-energy landscape. [sent-165, score-0.285]

91 We believe that methods of statistical physics are complementary to those used in the statistical inference community and can enhance our understanding of general graphical models. [sent-166, score-0.089]

92 Plefka, T. (1982) Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. [sent-169, score-0.165]

93 Frey, B. J. & MacKay, D. J. C. (1998) A revolution: belief propagation in graphs with cycles. [sent-192, score-0.106]

94 MacKay, D. J. C. (1999) Good error-correcting codes based on very sparse matrices, IEEE Transactions on Information Theory 45, 399-431. [sent-211, score-0.244]

95 Derrida, B. (1981) Random-energy model: an exactly solvable model of disordered systems, Physical Review B 24(5), 2613-2626. [sent-218, score-0.098]

96 Montanari, A. & Sourlas, N. (2000) The statistical mechanics of turbo codes, European Physical Journal B 18, 107-119. [sent-229, score-0.055]

97 Wong (1987) Graph bipartitioning and the Bethe spin glass, Journal of Physics A 20, L785-L791. [sent-234, score-0.055]

98 Kabashima, Y. & Saad, D. (1998) TAP for decoding corrupted messages, Europhysics Letters 44(5), 668-674. [sent-238, score-0.285]

99 Kschischang, F. R. & Frey, B. J. (1998) Iterative decoding of compound codes by probability propagation in graphical models, IEEE Journal on Selected Areas in Communications. [sent-243, score-0.568]

100 Gujrati, P. D. (1995) Bethe or Bethe-like lattice calculations are more reliable than conventional mean-field calculations, Physical Review Letters 74(5), 809-812. [sent-247, score-0.257]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('decoding', 0.285), ('tj', 0.264), ('gallager', 0.263), ('codes', 0.244), ('mod', 0.178), ('polygon', 0.178), ('thermodynamic', 0.165), ('code', 0.159), ('lattice', 0.159), ('husimi', 0.152), ('generation', 0.144), ('mj', 0.139), ('ls', 0.139), ('replica', 0.129), ('cactus', 0.127), ('xvj', 0.127), ('spin', 0.124), ('pp', 0.106), ('propagation', 0.106), ('atanh', 0.102), ('nmax', 0.102), ('polygons', 0.102), ('bit', 0.089), ('physics', 0.089), ('bethe', 0.088), ('generations', 0.088), ('hamiltonian', 0.088), ('magnetization', 0.088), ('spins', 0.088), ('iterating', 0.082), ('syndrome', 0.079), ('kabashima', 0.079), ('noise', 0.077), ('evemu', 0.076), ('pvj', 0.076), ('sourlas', 0.076), ('vemu', 0.076), ('saad', 0.073), ('physical', 0.072), ('base', 0.072), ('tanh', 0.071), ('recursion', 0.071), ('boundary', 0.07), ('totally', 0.066), ('vertices', 0.066), ('aligned', 0.065), ('message', 0.059), ('check', 0.059), ('graph', 0.057), ('shannon', 0.055), ('turbo', 0.055), ('field', 0.053), ('messages', 0.052), ('acquires', 0.051), ('disordered', 0.051), ('extrema', 0.051), ('extremization', 0.051), ('gte', 0.051), ('magnetizations', 0.051), ('tanner', 0.051), ('trh', 0.051), ('vicente', 0.051), ('calculations', 0.049), ('ising', 0.049), ('mn', 0.049), ('site', 0.049), ('local', 0.049), ('bound', 0.047), ('exactly', 0.047), ('connectivity', 0.047), ('fields', 0.046), ('effective', 0.044), ('landscape', 0.044), ('macroscopic', 0.044), ('mpm', 0.044), ('review', 0.043), ('external', 0.043), ('node', 0.042), ('glass', 0.041), ('operations', 0.041), ('stable', 0.041), ('transition', 0.041), ('coincides', 0.04), ('unstructured', 0.04), ('branches', 0.04), ('resorting', 0.04), ('probability', 0.039), ('typical', 0.038), ('vanishing', 0.037), ('pb', 0.037), ('geometry', 0.036), ('calculation', 0.035), ('uncorrelated', 0.034), ('frey', 0.034), ('fa', 0.034), ('impractical', 0.034), ('exp', 0.034), ('letters', 0.034), ('successful', 0.033), ('upper', 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999994 47 nips-2000-Error-correcting Codes on a Bethe-like Lattice

Author: Renato Vicente, David Saad, Yoshiyuki Kabashima

Abstract: We analyze Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as an extremization of a proper free-energy. We find a thermodynamic phase transition that coincides with information theoretical upper-bounds and explain the practical code performance in terms of the free-energy landscape.

2 0.16458525 126 nips-2000-Stagewise Processing in Error-correcting Codes and Image Restoration

Author: K. Y. Michael Wong, Hidetoshi Nishimori

Abstract: We introduce stagewise processing in error-correcting codes and image restoration, by extracting information from the former stage and using it selectively to improve the performance of the latter one. Both mean-field analysis using the cavity method and simulations show that it has the advantage of being robust against uncertainties in hyperparameter estimation.

3 0.1631591 68 nips-2000-Improved Output Coding for Classification Using Continuous Relaxation

Author: Koby Crammer, Yoram Singer

Abstract: Output coding is a general method for solving multiclass problems by reducing them to multiple binary classification problems. Previous research on output coding has employed, almost solely, predefined discrete codes. We describe an algorithm that improves the performance of output codes by relaxing them to continuous codes. The relaxation procedure is cast as an optimization problem and is reminiscent of the quadratic program for support vector machines. We describe experiments with the proposed algorithm, comparing it to standard discrete output codes. The experimental results indicate that continuous relaxations of output codes often improve the generalization performance, especially for short codes.

4 0.10706961 62 nips-2000-Generalized Belief Propagation

Author: Jonathan S. Yedidia, William T. Freeman, Yair Weiss

Abstract: Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs. We show that BP can only converge to a stationary point of an approximate free energy, known as the Bethe free energy in statistical physics. This result characterizes BP fixed-points and makes connections with variational approaches to approximate inference. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe's approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe's approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP.

5 0.097753696 140 nips-2000-Tree-Based Modeling and Estimation of Gaussian Processes on Graphs with Cycles

Author: Martin J. Wainwright, Erik B. Sudderth, Alan S. Willsky

Abstract: We present the embedded trees algorithm, an iterative technique for estimation of Gaussian processes defined on arbitrary graphs. By exactly solving a series of modified problems on embedded spanning trees, it computes the conditional means with an efficiency comparable to or better than other techniques. Unlike other methods, the embedded trees algorithm also computes exact error covariances. The error covariance computation is most efficient for graphs in which removing a small number of edges reveals an embedded tree. In this context, we demonstrate that sparse loopy graphs can provide a significant increase in modeling power relative to trees, with only a minor increase in estimation complexity.

6 0.093945086 77 nips-2000-Learning Curves for Gaussian Processes Regression: A Framework for Good Approximations

7 0.09144529 25 nips-2000-Analysis of Bit Error Probability of Direct-Sequence CDMA Multiuser Demodulators

8 0.090141006 115 nips-2000-Sequentially Fitting ``Inclusive'' Trees for Inference in Noisy-OR Networks

9 0.087482482 13 nips-2000-A Tighter Bound for Graphical Models

10 0.085857071 114 nips-2000-Second Order Approximations for Probability Models

11 0.083226994 15 nips-2000-Accumulator Networks: Suitors of Local Probability Propagation

12 0.07173869 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning

13 0.067552783 76 nips-2000-Learning Continuous Distributions: Simulations With Field Theoretic Priors

14 0.064106867 109 nips-2000-Redundancy and Dimensionality Reduction in Sparse-Distributed Representations of Natural Objects in Terms of Their Local Features

15 0.063999429 12 nips-2000-A Support Vector Method for Clustering

16 0.061406277 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images

17 0.05794818 14 nips-2000-A Variational Mean-Field Theory for Sigmoidal Belief Networks

18 0.054755747 19 nips-2000-Adaptive Object Representation with Hierarchically-Distributed Memory Sites

19 0.05324332 103 nips-2000-Probabilistic Semantic Video Indexing

20 0.048545975 145 nips-2000-Weak Learners and Improved Rates of Convergence in Boosting


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.197), (1, -0.009), (2, 0.08), (3, -0.019), (4, 0.133), (5, -0.02), (6, 0.033), (7, 0.012), (8, 0.028), (9, 0.167), (10, -0.008), (11, 0.112), (12, -0.179), (13, 0.154), (14, -0.134), (15, 0.057), (16, -0.159), (17, -0.031), (18, 0.199), (19, -0.322), (20, 0.023), (21, 0.085), (22, 0.033), (23, 0.055), (24, -0.022), (25, -0.028), (26, 0.314), (27, -0.031), (28, 0.155), (29, -0.031), (30, -0.04), (31, 0.006), (32, -0.012), (33, -0.058), (34, 0.053), (35, -0.116), (36, 0.01), (37, 0.036), (38, -0.089), (39, 0.004), (40, -0.036), (41, -0.02), (42, 0.024), (43, -0.087), (44, 0.061), (45, 0.015), (46, -0.028), (47, -0.045), (48, -0.032), (49, 0.042)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96501976 47 nips-2000-Error-correcting Codes on a Bethe-like Lattice

Author: Renato Vicente, David Saad, Yoshiyuki Kabashima

Abstract: We analyze Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as an extremization of a proper free-energy. We find a thermodynamic phase transition that coincides with information theoretical upper-bounds and explain the practical code performance in terms of the free-energy landscape.

2 0.80217588 25 nips-2000-Analysis of Bit Error Probability of Direct-Sequence CDMA Multiuser Demodulators

Author: Toshiyuki Tanaka

Abstract: We analyze the bit error probability of multiuser demodulators for direct-sequence binary phase-shift-keying (DS/BPSK) CDMA channel with additive Gaussian noise. The problem of multiuser demodulation is cast into the finite-temperature decoding problem, and replica analysis is applied to evaluate the performance of the resulting MPM (Marginal Posterior Mode) demodulators, which include the optimal demodulator and the MAP demodulator as special cases. An approximate implementation of demodulators is proposed using analog-valued Hopfield model as a naive mean-field approximation to the MPM demodulators, and its performance is also evaluated by the replica analysis. Results of the performance evaluation show the effectiveness of the optimal demodulator and the mean-field demodulator compared with the conventional one, especially in the cases of small information bit rate and low noise level.

3 0.70083153 126 nips-2000-Stagewise Processing in Error-correcting Codes and Image Restoration

Author: K. Y. Michael Wong, Hidetoshi Nishimori

Abstract: We introduce stagewise processing in error-correcting codes and image restoration, by extracting information from the former stage and using it selectively to improve the performance of the latter one. Both mean-field analysis using the cavity method and simulations show that it has the advantage of being robust against uncertainties in hyperparameter estimation.

4 0.53711545 68 nips-2000-Improved Output Coding for Classification Using Continuous Relaxation

Author: Koby Crammer, Yoram Singer

Abstract: Output coding is a general method for solving multiclass problems by reducing them to multiple binary classification problems. Previous research on output coding has employed, almost solely, predefined discrete codes. We describe an algorithm that improves the performance of output codes by relaxing them to continuous codes. The relaxation procedure is cast as an optimization problem and is reminiscent of the quadratic program for support vector machines. We describe experiments with the proposed algorithm, comparing it to standard discrete output codes. The experimental results indicate that continuous relaxations of output codes often improve the generalization performance, especially for short codes.

5 0.37415299 109 nips-2000-Redundancy and Dimensionality Reduction in Sparse-Distributed Representations of Natural Objects in Terms of Their Local Features

Author: Penio S. Penev

Abstract: Low-dimensional representations are key to solving problems in highlevel vision, such as face compression and recognition. Factorial coding strategies for reducing the redundancy present in natural images on the basis of their second-order statistics have been successful in accounting for both psychophysical and neurophysiological properties of early vision. Class-specific representations are presumably formed later, at the higher-level stages of cortical processing. Here we show that when retinotopic factorial codes are derived for ensembles of natural objects, such as human faces, not only redundancy, but also dimensionality is reduced. We also show that objects are built from parts in a non-Gaussian fashion which allows these local-feature codes to have dimensionalities that are substantially lower than the respective Nyquist sampling rates.

6 0.36176646 62 nips-2000-Generalized Belief Propagation

7 0.33233964 140 nips-2000-Tree-Based Modeling and Estimation of Gaussian Processes on Graphs with Cycles

8 0.31342176 115 nips-2000-Sequentially Fitting ``Inclusive'' Trees for Inference in Noisy-OR Networks

9 0.29280907 15 nips-2000-Accumulator Networks: Suitors of Local Probability Propagation

10 0.24460219 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images

11 0.24006389 76 nips-2000-Learning Continuous Distributions: Simulations With Field Theoretic Priors

12 0.22523957 77 nips-2000-Learning Curves for Gaussian Processes Regression: A Framework for Good Approximations

13 0.2087118 14 nips-2000-A Variational Mean-Field Theory for Sigmoidal Belief Networks

14 0.20868836 103 nips-2000-Probabilistic Semantic Video Indexing

15 0.20511682 114 nips-2000-Second Order Approximations for Probability Models

16 0.20342492 12 nips-2000-A Support Vector Method for Clustering

17 0.19446005 125 nips-2000-Stability and Noise in Biochemical Switches

18 0.18713249 13 nips-2000-A Tighter Bound for Graphical Models

19 0.17936188 19 nips-2000-Adaptive Object Representation with Hierarchically-Distributed Memory Sites

20 0.17672583 138 nips-2000-The Use of Classifiers in Sequential Inference


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.023), (17, 0.098), (32, 0.017), (33, 0.032), (34, 0.011), (54, 0.013), (55, 0.026), (62, 0.026), (65, 0.076), (67, 0.076), (75, 0.015), (76, 0.065), (81, 0.025), (90, 0.02), (91, 0.02), (92, 0.323), (97, 0.02), (99, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.88996142 62 nips-2000-Generalized Belief Propagation

Author: Jonathan S. Yedidia, William T. Freeman, Yair Weiss

Abstract: Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs. We show that BP can only converge to a stationary point of an approximate free energy, known as the Bethe free energy in statistical physics. This result characterizes BP fixed-points and makes connections with variational approaches to approximate inference. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe's approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe's approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP.

same-paper 2 0.84325325 47 nips-2000-Error-correcting Codes on a Bethe-like Lattice

Author: Renato Vicente, David Saad, Yoshiyuki Kabashima

Abstract: We analyze Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as an extremization of a proper free-energy. We find a thermodynamic phase transition that coincides with information theoretical upper-bounds and explain the practical code performance in terms of the free-energy landscape.

3 0.43043196 68 nips-2000-Improved Output Coding for Classification Using Continuous Relaxation

Author: Koby Crammer, Yoram Singer

Abstract: Output coding is a general method for solving multiclass problems by reducing them to multiple binary classification problems. Previous research on output coding has employed, almost solely, predefined discrete codes. We describe an algorithm that improves the performance of output codes by relaxing them to continuous codes. The relaxation procedure is cast as an optimization problem and is reminiscent of the quadratic program for support vector machines. We describe experiments with the proposed algorithm, comparing it to standard discrete output codes. The experimental results indicate that continuous relaxations of output codes often improve the generalization performance, especially for short codes.

4 0.41075575 122 nips-2000-Sparse Representation for Gaussian Process Models

Author: Lehel Csató, Manfred Opper

Abstract: We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world data sets indicate the efficiency of the approach.

5 0.40340063 134 nips-2000-The Kernel Trick for Distances

Author: Bernhard Schölkopf

Abstract: A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually really distance based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.

6 0.40267378 146 nips-2000-What Can a Single Neuron Compute?

7 0.40196204 15 nips-2000-Accumulator Networks: Suitors of Local Probability Propagation

8 0.40154499 37 nips-2000-Convergence of Large Margin Separable Linear Classification

9 0.39932904 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

10 0.39925617 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning

11 0.39738923 64 nips-2000-High-temperature Expansions for Learning Models of Nonnegative Data

12 0.39708188 95 nips-2000-On a Connection between Kernel PCA and Metric Multidimensional Scaling

13 0.39562178 74 nips-2000-Kernel Expansions with Unlabeled Examples

14 0.39319962 111 nips-2000-Regularized Winnow Methods

15 0.39268285 79 nips-2000-Learning Segmentation by Random Walks

16 0.39039764 46 nips-2000-Ensemble Learning and Linear Response Theory for ICA

17 0.3862648 13 nips-2000-A Tighter Bound for Graphical Models

18 0.38562545 49 nips-2000-Explaining Away in Weight Space

19 0.38539228 60 nips-2000-Gaussianization

20 0.38477316 36 nips-2000-Constrained Independent Component Analysis