nips nips2001 nips2001-196 knowledge-graph by maker-knowledge-mining

196 nips-2001-Very loopy belief propagation for unwrapping phase images


Source: pdf

Author: Brendan J. Frey, Ralf Koetter, Nemanja Petrovic

Abstract: Since the discovery that the best error-correcting decoding algorithm can be viewed as belief propagation in a cycle-bound graph, researchers have been trying to determine under what circumstances

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Very loopy belief propagation for unwrapping phase images Brendan J. [sent-1, score-1.383]

2 Despite several theoretical advances in our understanding of loopy belief propagation, to our knowledge, the only problem that has been solved using loopy belief propagation is error-correcting decoding on Gaussian channels. [sent-6, score-1.211]

3 We propose a new representation for the two-dimensional phase unwrapping problem, and we show that loopy belief propagation produces results that are superior to existing techniques. [sent-7, score-1.357]

4 This is an important result, since many imaging techniques, including magnetic resonance imaging and interferometric synthetic aperture radar, produce phase-wrapped images. [sent-8, score-0.376]

5 Interestingly, the graph that we use has a very large number of very short cycles, supporting evidence that a large minimum cycle length is not needed for excellent results using belief propagation. [sent-9, score-0.263]

6 1 Introduction Phase unwrapping is an easily stated, fundamental problem in image processing (Ghiglia and Pritt 1998). [sent-10, score-0.596]

7 Each real-valued observation on a 1- or 2-dimensional grid is measured modulo a known wavelength, which we take to be 1 without loss of generality. [sent-11, score-0.077]

8 Fig. 1b shows the wrapped, 1-dimensional waveform obtained from the original waveform shown in Fig. 1a. [sent-13, score-0.25]

9 Every time the original waveform goes above 1 or below 0, it is wrapped to 0 or 1, respectively. [sent-15, score-0.354]

10 The goal of phase unwrapping is to infer the original, unwrapped curve from the wrapped measurements, using knowledge about which signals are more probable a priori. [sent-16, score-1.131]
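
To make the measurement model concrete, here is a minimal sketch (the smooth test signal below is invented for illustration, and the wavelength is taken to be 1 as above):

```python
import numpy as np

# A smooth, synthetic 1-dimensional signal (hypothetical test data).
t = np.linspace(0.0, 4.0 * np.pi, 200)
original = 1.5 * np.sin(t) + 0.1 * t

# Measuring modulo the wavelength keeps only the fractional part:
# every crossing of an integer boundary produces a jump in the
# observed signal, as in Fig. 1b.
wrapped = np.mod(original, 1.0)

# The unwrapping task is to recover `original` (up to an additive
# integer) from `wrapped` alone.
```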

11 In two dimensions, exact phase unwrapping is exponentially more difficult than 1-dimensional phase unwrapping and has been shown to be NP-hard in general (Chen and Zebker 2000). [sent-17, score-1.372]

12 Fig. 1c shows the wrapped output of a magnetic resonance imaging device, courtesy of Z. [sent-19, score-0.501]

13 Notice the "fringe lines" - boundaries across which wrappings have occurred. [sent-22, score-0.237]

14 Fig. 1d shows the wrapped terrain height measurements from an interferometric synthetic aperture radar, courtesy of Sandia National Laboratories, New Mexico. [sent-24, score-0.487]

15 Figure 1: (a) A waveform measured on a 1-dimensional grid. [sent-25, score-0.151]

16 (b) The phase-wrapped version of the waveform in (a), where the wavelength is 1. [sent-26, score-0.194]

17 (c) A wrapped intensity map from a magnetic resonance imaging device, measured on a 2-dimensional grid (courtesy of Z.). [sent-27, score-0.505]

18 (d) A wrapped topographic map measured on a 2-dimensional grid (courtesy of Sandia National Laboratories, New Mexico). [sent-30, score-0.306]

19 A sensible goal in phase unwrapping is to infer the gradient field of the original surface. [sent-31, score-0.843]

20 Equivalently, the goal is to infer the number of relative wrappings, or integer "shifts", between every pair of neighboring measurements. [sent-33, score-0.196]

21 Positive shifts correspond to an increase in the number of wrappings in the direction of the x or y coordinate; negative shifts correspond to a decrease. [sent-34, score-1.06]

22 After arbitrarily assigning an absolute number of wrappings to one point, the absolute number of wrappings at any other point can be determined by summing the shifts along a path connecting the two points. [sent-35, score-0.771]

23 To account for direction, when taking a step against the direction of the coordinate, the shift should be subtracted. [sent-36, score-0.115]
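
The path-summation rule, including this sign convention, transcribes directly into code; the sketch below uses 0-based array indexing and helper names of my own choosing:

```python
import numpy as np

def wrappings_along_path(a, b, path):
    """Sum the shifts along `path`, a list of (x, y) grid points in
    which consecutive points differ by one step in x or y.
    a[x, y] is the x-direction shift between (x, y) and (x+1, y);
    b[x, y] is the y-direction shift between (x, y) and (x, y+1)."""
    total = 0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        if x1 == x0 + 1:        # step along +x: add the x-shift
            total += a[x0, y0]
        elif x1 == x0 - 1:      # step against x: subtract it
            total -= a[x1, y1]
        elif y1 == y0 + 1:      # step along +y: add the y-shift
            total += b[x0, y0]
        else:                   # step against y: subtract it
            total -= b[x0, y0 - 1]
    return total
```

Given the absolute number of wrappings assigned to the path's starting point, adding the returned total gives the absolute number at its end point.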

24 When neighboring measurements are a priori more likely to be close together than far apart, 1-dimensional waveforms can be unwrapped optimally in time that is linear in the waveform length. [sent-37, score-0.458]

25 For every pair of neighboring measurements, the shift that makes the unwrapped values as close together as possible is chosen. [sent-38, score-0.361]
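
A minimal sketch of that linear-time 1-dimensional procedure (the function name is mine; wrapped values are assumed to lie in [0, 1)):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Optimal 1-D unwrapping under a 'neighbors are close' prior:
    for each neighboring pair, pick the integer shift that places the
    next unwrapped value as close as possible to the previous one."""
    unwrapped = [wrapped[0]]
    for w in wrapped[1:]:
        prev = unwrapped[-1]
        # The candidates are w + k for integer k; the one nearest to
        # prev uses k = round(prev - w).
        unwrapped.append(w + np.round(prev - w))
    return np.array(unwrapped)
```

For signals whose true neighbor differences stay below half a wavelength, this recovers the original curve up to an additive integer.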

26 For 2-dimensional surfaces and images, there are many possible 1-dimensional paths between any two points. [sent-44, score-0.025]

27 These paths should be examined in combination, since the sum of the shifts along every such path should be equal. [sent-45, score-0.375]

28 Viewing the shifts as state variables, the cut-set between any two points is exponential in the size of the grid, making exact inference for general priors NP-hard (Chen and Zebker 2000). [sent-46, score-0.266]

29 The two leading fully-automated techniques for phase unwrapping are the least squares method and the branch cut technique (Ghiglia and Pritt 1998). [sent-47, score-0.92]

30 (Some other techniques perform better in some circumstances, but need additional information or require hand-tweaking.) [sent-48, score-0.024]

31 The least squares method begins by making a greedy guess at the gradient between every pair of neighboring points. [sent-49, score-0.286]

32 The resulting vector field is not the gradient field of a surface, since in a valid gradient field, the sum of the gradients around every closed loop must be zero (that is, the curl must be 0). [sent-50, score-0.446]

33 The least squares method proceeds by projecting the vector field onto the linear subspace of gradient fields. [sent-60, score-0.179]
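
A sketch of this baseline under the stated assumptions (the Jacobi solver and boundary handling are my choices for brevity; practical implementations use FFT- or multigrid-based Poisson solvers):

```python
import numpy as np

def least_squares_unwrap(psi, iters=2000):
    """Least-squares phase unwrapping: guess each gradient greedily,
    then project onto valid gradient fields by solving the resulting
    discrete Poisson equation for the surface phi."""
    wrap = lambda d: d - np.round(d)   # greedy gradient guess in (-0.5, 0.5]
    gx = wrap(np.diff(psi, axis=0))    # (N-1, M) x-gradient guesses
    gy = wrap(np.diff(psi, axis=1))    # (N, M-1) y-gradient guesses

    # Divergence of the guessed gradient field (right-hand side).
    rho = np.zeros_like(psi)
    rho[:-1, :] += gx; rho[1:, :] -= gx
    rho[:, :-1] += gy; rho[:, 1:] -= gy

    # Jacobi sweeps on the Neumann-boundary Poisson equation
    # (chosen for clarity, not speed).
    phi = np.zeros_like(psi)
    for _ in range(iters):
        nb = np.zeros_like(phi); cnt = np.zeros_like(phi)
        nb[:-1, :] += phi[1:, :];  cnt[:-1, :] += 1
        nb[1:, :]  += phi[:-1, :]; cnt[1:, :]  += 1
        nb[:, :-1] += phi[:, 1:];  cnt[:, :-1] += 1
        nb[:, 1:]  += phi[:, :-1]; cnt[:, 1:]  += 1
        phi = (nb - rho) / cnt
    return phi   # the surface is recovered up to an additive constant
```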

34 The branch cut technique also begins with greedy decisions for the gradients and then identifies untrustworthy regions of the image whose gradients should not be used during integration. [sent-62, score-0.34]

35 As shown in our results section, both of these techniques are suboptimal. [sent-63, score-0.024]

36 Previously, we attempted to use a relaxed mean field technique to solve this problem (Achan, Frey and Koetter 2001). [sent-64, score-0.091]

37 In this paper, we introduce a new framework for quantitative evaluation, which impressively places belief propagation much closer to the theoretical limit than other leading methods. [sent-67, score-0.538]

38 Belief propagation (a.k.a. the sum-product algorithm, probability propagation) is exact in graphs that are trees (Pearl 1988), but it has been discovered only recently that it can produce excellent results in graphs with many cycles. [sent-71, score-0.104]

39 Impressive results have been obtained using loopy belief propagation for super-resolution (Freeman and Pasztor 1999) and for inferring layered representations of scenes (Frey 2000). [sent-72, score-0.749]

40 However, despite several theoretical advances in our understanding of loopy belief propagation (c.f. (Weiss and Freeman 2001)) and proposals for modifications to the algorithm (c.f. (Yedidia, Freeman and Weiss 2001)), to our knowledge, the only problem that has been solved by loopy belief propagation is error-correcting decoding on Gaussian channels. [sent-73, score-0.696] [sent-75, score-0.023] [sent-77, score-0.757]

43 We conjecture that although phase unwrapping is generally NP-hard, there exists a near-optimal phase unwrapping algorithm for Gaussian process priors. [sent-78, score-1.401]

44 Further, we believe that algorithm to be loopy belief propagation. [sent-79, score-0.429]

45 2 Loopy Belief Propagation for Phase Unwrapping As described above, the goal is to infer the number of relative wrappings, or integer "shifts", between every pair of neighboring measurements. [sent-80, score-0.433]

46 Denote the x-direction shift at (x,y) by a(x,y) and the y-direction shift at (x,y) by b(x,y), as shown in Fig. 2. [sent-81, score-0.176]

47 If the sum of the shifts around every short loop of 4 shifts (e.g. Fig. 2a) is zero, then perturbing a path will not change the sum of the shifts along the path. [sent-83, score-0.628] [sent-86, score-0.319]

49 So, a valid set of shifts S = {a(x,y), b(x,y) : x = 1, ..., N-1, y = 1, ..., M-1} in an N x M image must satisfy the constraint a(x,y) + b(x+1,y) - a(x,y+1) - b(x,y) = 0, (1) for x = 1, ..., N-1 and y = 1, ..., M-1. [sent-87, score-0.266] [sent-93, score-0.046]

51 Since a(x,y) + b(x+1,y) - a(x,y+1) - b(x,y) is a measure of curl at (x,y), we refer to (1) as a "zero-curl constraint", reflecting the fact that the curl of a gradient field is 0. [sent-100, score-0.26]

52 In this way, phase unwrapping is formulated as the problem of inferring the most probable set of shifts subject to satisfying all zero-curl constraints. [sent-101, score-0.981]
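
Constraint (1) transcribes directly into a vectorized check on a candidate shift field; a sketch with 0-based indices and array shapes following the indexing above:

```python
import numpy as np

def curl(a, b):
    """a: (N-1, M) array of x-shifts; b: (N, M-1) array of y-shifts.
    Returns a(x,y) + b(x+1,y) - a(x,y+1) - b(x,y) at every elementary
    loop; a valid shift field gives all zeros."""
    return a[:, :-1] + b[1:, :] - a[:, 1:] - b[:-1, :]

# Example: the all-zero shift field trivially satisfies every
# zero-curl constraint on a 5 x 5 grid.
a = np.zeros((4, 5), dtype=int)
b = np.zeros((5, 4), dtype=int)
assert np.all(curl(a, b) == 0)
```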

53 We assume that given the set of shifts, the unwrapped surface is described by a low-order Gaussian process. [sent-102, score-0.204]

54 The joint distribution over the shifts S = {a(x,y), b(x,y)} and the wrapped measurements φ is proportional to the product of the zero-curl constraint terms and the Gaussian terms ∏∏ e^(-(φ(x+1,y) - φ(x,y) - a(x,y))²/2σ²) · ∏∏ e^(-(φ(x,y+1) - φ(x,y) - b(x,y))²/2σ²), with the products running over all pairs of neighboring grid points. [sent-103, score-0.266] [sent-109, score-0.315]

56 The constraints are enforced by the indicator function δ(·), which evaluates to 1 if its argument is 0 and evaluates to 0 otherwise. [sent-111, score-0.062]

57 We assume the slope of the surface is limited so that the unknown shifts take on the values -1, 0 and 1. [sent-112, score-0.409]

58 σ² is the variance between two neighboring measurements in the unwrapped image, but we find that in practice it can be estimated directly from the wrapped image. [sent-113, score-0.576]
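
One plausible way to estimate σ directly from the wrapped image (this particular statistic is my choice, not necessarily the authors' recipe) is to re-wrap the neighbor differences, most of which contain no shift, and take their spread:

```python
import numpy as np

def estimate_sigma(psi):
    """Estimate the neighbor-difference standard deviation from the
    wrapped image psi alone."""
    d = np.concatenate([np.diff(psi, axis=0).ravel(),
                        np.diff(psi, axis=1).ravel()])
    d = d - np.round(d)   # re-wrap differences into (-0.5, 0.5]
    return d.std()
```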

59 Phase unwrapping consists of making inferences about the a's and b's in the above probability model. [sent-114, score-0.527]

60 For example, the marginal probability that the x-direction shift at (x,y) is k, given an observed wrapped image φ, is P(a(x,y) = k | φ). [sent-115, score-0.411]
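
Below is a from-scratch sketch of loopy belief propagation on this graph (a generic implementation of my own, not the authors' code or schedule): variables are the shifts a and b with states {-1, 0, 1}, factors are the zero-curl constraints, and each factor-to-variable message sums over the 3³ joint states of the other three shifts on its loop.

```python
import numpy as np
from itertools import product

SHIFTS = np.array([-1, 0, 1])

def loopy_bp_shifts(psi, sigma, iters=30):
    """Approximate the shift marginals P(a(x,y) = k | psi) by loopy
    BP on the zero-curl constraint graph; returns argmax estimates."""
    N, M = psi.shape

    def evidence(d):   # Gaussian local evidence for the three shifts
        return np.exp(-(d - SHIFTS) ** 2 / (2.0 * sigma ** 2))

    ev = {('a', x, y): evidence(psi[x + 1, y] - psi[x, y])
          for x in range(N - 1) for y in range(M)}
    ev.update({('b', x, y): evidence(psi[x, y + 1] - psi[x, y])
               for x in range(N) for y in range(M - 1)})

    # Factor (x,y) enforces a(x,y) + b(x+1,y) - a(x,y+1) - b(x,y) = 0.
    factors = [((('a', x, y), +1), (('b', x + 1, y), +1),
                (('a', x, y + 1), -1), (('b', x, y), -1))
               for x in range(N - 1) for y in range(M - 1)]

    fac_of = {}                                # variable -> factor ids
    for fid, f in enumerate(factors):
        for v, _ in f:
            fac_of.setdefault(v, []).append(fid)

    m_fv = {(fid, v): np.ones(3)               # factor -> variable msgs
            for fid, f in enumerate(factors) for v, _ in f}

    for _ in range(iters):
        # Variable -> factor: evidence times the other factors' messages.
        m_vf = {}
        for v, fids in fac_of.items():
            for fid in fids:
                m = ev[v].copy()
                for g in fids:
                    if g != fid:
                        m = m * m_fv[(g, v)]
                m_vf[(v, fid)] = m / m.sum()

        # Factor -> variable: sum over states of the other three shifts
        # that make the signed sum around the loop equal zero.
        for fid, f in enumerate(factors):
            for i, (v, s) in enumerate(f):
                others = [f[j] for j in range(4) if j != i]
                out = np.zeros(3)
                for ks in product(range(3), repeat=3):
                    rest = sum(sg * SHIFTS[k]
                               for (_, sg), k in zip(others, ks))
                    w = 1.0
                    for (vv, _), k in zip(others, ks):
                        w *= m_vf[(vv, fid)][k]
                    for kv in range(3):
                        if s * SHIFTS[kv] + rest == 0:
                            out[kv] += w
                m_fv[(fid, v)] = out / out.sum()

    # Beliefs: evidence times all incoming factor messages.
    return {v: SHIFTS[np.argmax(ev[v] * np.prod(
                [m_fv[(g, v)] for g in fac_of[v]], axis=0))]
            for v in ev}
```

The messages here are updated in parallel without damping, which is one of several reasonable schedules; on small grids this brute-force version runs in seconds.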

61 (The plot for the mean squared error in the surface heights looks similar.) [sent-117, score-0.072]

62 As the wavelength λ → ∞, unwrapping becomes trivial (since no wrappings occur), so algorithms have waterfall-shaped curves. [sent-120, score-0.764]

63 The belief propagation algorithm clearly obtains significantly lower reconstruction errors. [sent-121, score-0.596]

64 Viewed another way, belief propagation can tolerate much lower wrapping wavelengths for a given reconstruction error. [sent-122, score-0.574]

65 Also, it turns out that for this surface, it is impossible for an algorithm that infers relative shifts of -1, 0 and 1 to obtain a reconstruction error of 0, unless λ ≳ 12. [sent-123, score-0.368]

66 Belief propagation obtains a zero-error wavelength that is significantly closer to this limit than the least squares method and the branch cut technique. [sent-125, score-0.596]

67 4 Conclusions Phase unwrapping is a fundamental problem in image processing, and although it has been shown to be NP-hard for general priors (Chen and Zebker 2000), we conjecture there exists a near-optimal phase unwrapping algorithm for Gaussian process priors. [sent-126, score-1.311]

68 Further, we believe that algorithm to be loopy belief propagation. [sent-127, score-0.429]

69 The belief propagation algorithm runs in about the same time as the other techniques. [sent-129, score-0.479]

70 A factorized variational technique for phase unwrapping in Markov random fields. [sent-135, score-0.716]

71 Network approaches to two-dimensional phase unwrapping: intractability and two new algorithms. [sent-143, score-0.159]

72 In Proceedings of the International Conference on Computer Vision, pages 1182-1189. [sent-149, score-0.023]

73 Filling in scenes by propagating probabilities through layers and into appearance models. [sent-153, score-0.074]

74 Iterative decoding of compound codes by probability propagation in graphical models. [sent-186, score-0.376]

75 On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs. [sent-218, score-0.479]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('unwrapping', 0.527), ('shifts', 0.266), ('propagation', 0.242), ('wrappings', 0.237), ('belief', 0.237), ('wrapped', 0.229), ('loopy', 0.192), ('phase', 0.159), ('koetter', 0.158), ('frey', 0.139), ('ghiglia', 0.132), ('pritt', 0.132), ('unwrapped', 0.132), ('zebker', 0.132), ('waveform', 0.125), ('shift', 0.088), ('decoding', 0.086), ('measurements', 0.086), ('kschischang', 0.084), ('chen', 0.083), ('neighboring', 0.081), ('freeman', 0.079), ('curl', 0.079), ('imaging', 0.078), ('branch', 0.073), ('courtesy', 0.073), ('surface', 0.072), ('wavelength', 0.069), ('gradients', 0.067), ('magnetic', 0.063), ('field', 0.061), ('resonance', 0.058), ('mackay', 0.058), ('weiss', 0.058), ('infer', 0.055), ('squares', 0.055), ('achan', 0.053), ('interferometric', 0.053), ('loeliger', 0.053), ('pasztor', 0.053), ('radar', 0.053), ('sandia', 0.053), ('wiberg', 0.053), ('grid', 0.051), ('reconstruction', 0.049), ('hat', 0.048), ('codes', 0.048), ('image', 0.046), ('pearl', 0.046), ('petrovic', 0.046), ('cheng', 0.046), ('aperture', 0.046), ('loop', 0.043), ('obtains', 0.042), ('propagating', 0.042), ('mceliece', 0.042), ('gradient', 0.041), ('graphs', 0.039), ('communications', 0.037), ('laboratories', 0.037), ('closer', 0.034), ('cuts', 0.033), ('circumstances', 0.032), ('device', 0.032), ('scenes', 0.032), ('iterative', 0.032), ('every', 0.031), ('path', 0.031), ('neal', 0.031), ('evaluates', 0.031), ('yedidia', 0.031), ('technique', 0.03), ('cut', 0.03), ('impossible', 0.03), ('probable', 0.029), ('conjecture', 0.029), ('pair', 0.029), ('begins', 0.027), ('direction', 0.027), ('impressive', 0.026), ('significantly', 0.026), ('images', 0.026), ('measured', 0.026), ('excellent', 0.026), ('theoretical', 0.025), ('paths', 0.025), ('techniques', 0.024), ('fundamental', 0.023), ('ernational', 0.023), ('proposals', 0.023), ('urbana', 0.023), ('layered', 0.023), ('infers', 0.023), ('wavelengths', 0.023), ('infering', 0.023), ('filling', 0.023), ('tolerate', 0.023), ('slope', 0.023), ('least', 0.022), ('sum', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999994 196 nips-2001-Very loopy belief propagation for unwrapping phase images

Author: Brendan J. Frey, Ralf Koetter, Nemanja Petrovic

Abstract: Since the discovery that the best error-correcting decoding algorithm can be viewed as belief propagation in a cycle-bound graph, researchers have been trying to determine under what circumstances

2 0.18162327 188 nips-2001-The Unified Propagation and Scaling Algorithm

Author: Yee W. Teh, Max Welling

Abstract: In this paper we will show that a restricted class of constrained minimum divergence problems, named generalized inference problems, can be solved by approximating the KL divergence with a Bethe free energy. The algorithm we derive is closely related to both loopy belief propagation and iterative scaling. This unified propagation and scaling algorithm reduces to a convergent alternative to loopy belief propagation when no constraints are present. Experiments show the viability of our algorithm.

3 0.17992048 97 nips-2001-Information-Geometrical Significance of Sparsity in Gallager Codes

Author: Toshiyuki Tanaka, Shiro Ikeda, Shun-ichi Amari

Abstract: We report a result of perturbation analysis on decoding error of the belief propagation decoder for Gallager codes. The analysis is based on information geometry, and it shows that the principal term of decoding error at equilibrium comes from the m-embedding curvature of the log-linear submanifold spanned by the estimated pseudoposteriors, one for the full marginal, and K for partial posteriors, each of which takes a single check into account, where K is the number of checks in the Gallager code. It is then shown that the principal error term vanishes when the parity-check matrix of the code is so sparse that there are no two columns with overlap greater than 1.

4 0.11907539 98 nips-2001-Information Geometrical Framework for Analyzing Belief Propagation Decoder

Author: Shiro Ikeda, Toshiyuki Tanaka, Shun-ichi Amari

Abstract: The mystery of belief propagation (BP) decoder, especially of the turbo decoding, is studied from information geometrical viewpoint. The loopy belief network (BN) of turbo codes makes it difficult to obtain the true “belief” by BP, and the characteristics of the algorithm and its equilibrium are not clearly understood. Our study gives an intuitive understanding of the mechanism, and a new framework for the analysis. Based on the framework, we reveal basic properties of the turbo decoding.

5 0.092428781 192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs

Author: Martin J. Wainwright, Tommi Jaakkola, Alan S. Willsky

Abstract: We develop a tree-based reparameterization framework that provides a new conceptual view of a large class of iterative algorithms for computing approximate marginals in graphs with cycles. It includes belief propagation (BP), which can be reformulated as a very local form of reparameterization. More generally, we consider algorithms that perform exact computations over spanning trees of the full graph. On the practical side, we find that such tree reparameterization (TRP) algorithms have convergence properties superior to BP. The reparameterization perspective also provides a number of theoretical insights into approximate inference, including a new characterization of fixed points; and an invariance intrinsic to TRP/BP. These two properties enable us to analyze and bound the error between the TRP/BP approximations and the actual marginals. While our results arise naturally from the TRP perspective, most of them apply in an algorithm-independent manner to any local minimum of the Bethe free energy. Our results also have natural extensions to more structured approximations [e.g., 1, 2].

6 0.077126622 178 nips-2001-TAP Gibbs Free Energy, Belief Propagation and Sparsity

7 0.065512799 75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

8 0.063346937 132 nips-2001-Novel iteration schemes for the Cluster Variation Method

9 0.047349438 153 nips-2001-Product Analysis: Learning to Model Observations as Products of Hidden Variables

10 0.04361666 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

11 0.040204816 119 nips-2001-Means, Correlations and Bounds

12 0.038565267 13 nips-2001-A Natural Policy Gradient

13 0.037148014 39 nips-2001-Audio-Visual Sound Separation Via Hidden Markov Models

14 0.036586352 123 nips-2001-Modeling Temporal Structure in Classical Conditioning

15 0.035634294 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

16 0.034829326 89 nips-2001-Grouping with Bias

17 0.034335081 84 nips-2001-Global Coordination of Local Linear Models

18 0.032622132 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

19 0.032181177 17 nips-2001-A Quantitative Model of Counterfactual Reasoning

20 0.03146565 10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.109), (1, -0.026), (2, 0.004), (3, -0.068), (4, -0.034), (5, -0.211), (6, 0.006), (7, -0.191), (8, 0.176), (9, 0.133), (10, -0.1), (11, -0.032), (12, 0.023), (13, 0.022), (14, 0.012), (15, 0.035), (16, 0.019), (17, 0.005), (18, -0.038), (19, 0.06), (20, 0.016), (21, -0.011), (22, -0.034), (23, -0.072), (24, -0.025), (25, -0.031), (26, -0.002), (27, -0.005), (28, -0.07), (29, -0.083), (30, 0.037), (31, 0.011), (32, 0.011), (33, 0.074), (34, 0.001), (35, -0.131), (36, -0.023), (37, 0.088), (38, -0.071), (39, -0.036), (40, 0.092), (41, -0.08), (42, 0.005), (43, 0.067), (44, 0.108), (45, 0.087), (46, 0.001), (47, 0.032), (48, -0.046), (49, 0.137)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97249722 196 nips-2001-Very loopy belief propagation for unwrapping phase images

Author: Brendan J. Frey, Ralf Koetter, Nemanja Petrovic

Abstract: Since the discovery that the best error-correcting decoding algorithm can be viewed as belief propagation in a cycle-bound graph, researchers have been trying to determine under what circumstances

2 0.74549639 188 nips-2001-The Unified Propagation and Scaling Algorithm

Author: Yee W. Teh, Max Welling

Abstract: In this paper we will show that a restricted class of constrained minimum divergence problems, named generalized inference problems, can be solved by approximating the KL divergence with a Bethe free energy. The algorithm we derive is closely related to both loopy belief propagation and iterative scaling. This unified propagation and scaling algorithm reduces to a convergent alternative to loopy belief propagation when no constraints are present. Experiments show the viability of our algorithm.

3 0.7266283 192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs

Author: Martin J. Wainwright, Tommi Jaakkola, Alan S. Willsky

Abstract: We develop a tree-based reparameterization framework that provides a new conceptual view of a large class of iterative algorithms for computing approximate marginals in graphs with cycles. It includes belief propagation (BP), which can be reformulated as a very local form of reparameterization. More generally, we consider algorithms that perform exact computations over spanning trees of the full graph. On the practical side, we find that such tree reparameterization (TRP) algorithms have convergence properties superior to BP. The reparameterization perspective also provides a number of theoretical insights into approximate inference, including a new characterization of fixed points; and an invariance intrinsic to TRP/BP. These two properties enable us to analyze and bound the error between the TRP/BP approximations and the actual marginals. While our results arise naturally from the TRP perspective, most of them apply in an algorithm-independent manner to any local minimum of the Bethe free energy. Our results also have natural extensions to more structured approximations [e.g., 1, 2].

4 0.60540903 98 nips-2001-Information Geometrical Framework for Analyzing Belief Propagation Decoder

Author: Shiro Ikeda, Toshiyuki Tanaka, Shun-ichi Amari

Abstract: The mystery of belief propagation (BP) decoder, especially of the turbo decoding, is studied from information geometrical viewpoint. The loopy belief network (BN) of turbo codes makes it difficult to obtain the true “belief” by BP, and the characteristics of the algorithm and its equilibrium are not clearly understood. Our study gives an intuitive understanding of the mechanism, and a new framework for the analysis. Based on the framework, we reveal basic properties of the turbo decoding.

5 0.59217566 97 nips-2001-Information-Geometrical Significance of Sparsity in Gallager Codes

Author: Toshiyuki Tanaka, Shiro Ikeda, Shun-ichi Amari

Abstract: We report a result of perturbation analysis on decoding error of the belief propagation decoder for Gallager codes. The analysis is based on information geometry, and it shows that the principal term of decoding error at equilibrium comes from the m-embedding curvature of the log-linear submanifold spanned by the estimated pseudoposteriors, one for the full marginal, and K for partial posteriors, each of which takes a single check into account, where K is the number of checks in the Gallager code. It is then shown that the principal error term vanishes when the parity-check matrix of the code is so sparse that there are no two columns with overlap greater than 1.

6 0.39869756 132 nips-2001-Novel iteration schemes for the Cluster Variation Method

7 0.37761956 178 nips-2001-TAP Gibbs Free Energy, Belief Propagation and Sparsity

8 0.26123765 75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

9 0.24877909 84 nips-2001-Global Coordination of Local Linear Models

10 0.22418155 101 nips-2001-K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms

11 0.20092101 182 nips-2001-The Fidelity of Local Ordinal Encoding

12 0.1933414 3 nips-2001-ACh, Uncertainty, and Cortical Inference

13 0.19256532 91 nips-2001-Improvisation and Learning

14 0.18707219 6 nips-2001-A Bayesian Network for Real-Time Musical Accompaniment

15 0.18519656 89 nips-2001-Grouping with Bias

16 0.18385646 153 nips-2001-Product Analysis: Learning to Model Observations as Products of Hidden Variables

17 0.18016934 191 nips-2001-Transform-invariant Image Decomposition with Similarity Templates

18 0.17371336 13 nips-2001-A Natural Policy Gradient

19 0.17085221 9 nips-2001-A Generalization of Principal Components Analysis to the Exponential Family

20 0.17008816 17 nips-2001-A Quantitative Model of Counterfactual Reasoning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.032), (17, 0.028), (19, 0.031), (27, 0.117), (30, 0.069), (38, 0.026), (49, 0.024), (54, 0.392), (59, 0.016), (72, 0.059), (79, 0.041), (83, 0.013), (91, 0.067)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81380355 196 nips-2001-Very loopy belief propagation for unwrapping phase images

Author: Brendan J. Frey, Ralf Koetter, Nemanja Petrovic

Abstract: Since the discovery that the best error-correcting decoding algorithm can be viewed as belief propagation in a cycle-bound graph, researchers have been trying to determine under what circumstances

2 0.62120324 169 nips-2001-Small-World Phenomena and the Dynamics of Information

Author: Jon M. Kleinberg

Abstract: unknown-abstract

3 0.3977752 29 nips-2001-Adaptive Sparseness Using Jeffreys Prior

Author: Mário Figueiredo

Abstract: In this paper we introduce a new sparseness inducing prior which does not involve any (hyper)parameters that need to be adjusted or estimated. Although other applications are possible, we focus here on supervised learning problems: regression and classification. Experiments with several publicly available benchmark data sets show that the proposed approach yields state-of-the-art performance. In particular, our method outperforms support vector machines and performs competitively with the best alternative techniques, both in terms of error rates and sparseness, although it involves no tuning or adjusting of sparsenesscontrolling hyper-parameters.

4 0.39671123 13 nips-2001-A Natural Policy Gradient

Author: Sham M. Kakade

Abstract: We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al. [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris.

5 0.39325243 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterium and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model.

6 0.39313906 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

7 0.39268175 9 nips-2001-A Generalization of Principal Components Analysis to the Exponential Family

8 0.39210561 60 nips-2001-Discriminative Direction for Kernel Classifiers

9 0.39052805 137 nips-2001-On the Convergence of Leveraging

10 0.39038992 8 nips-2001-A General Greedy Approximation Algorithm with Applications

11 0.38982338 157 nips-2001-Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning

12 0.38830775 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

13 0.38795662 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

14 0.38760585 153 nips-2001-Product Analysis: Learning to Model Observations as Products of Hidden Variables

15 0.38754195 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

16 0.38731077 138 nips-2001-On the Generalization Ability of On-Line Learning Algorithms

17 0.38692492 190 nips-2001-Thin Junction Trees

18 0.38678718 88 nips-2001-Grouping and dimensionality reduction by locally linear embedding

19 0.38658717 103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Souce Separation

20 0.38625437 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine