nips nips2006 nips2006-120 knowledge-graph by maker-knowledge-mining

120 nips-2006-Learning to Traverse Image Manifolds


Source: pdf

Author: Piotr Dollár, Vincent Rabaud, Serge J. Belongie

Abstract: We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data. Applications of our proposed technique include embedding with a natural out-of-sample extension and tasks such as tangent distance estimation, frame rate up-conversion, video compression and motion transfer. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. [sent-3, score-0.537]

2 Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data. [sent-4, score-0.478]

3 Applications of our proposed technique include embedding with a natural out-of-sample extension and tasks such as tangent distance estimation, frame rate up-conversion, video compression and motion transfer. [sent-5, score-0.668]

4 Unsupervised manifold learning refers to the problem of recovering the structure of a manifold from a set of unordered sample points. [sent-8, score-0.747]

5 The goal of these methods is to find an embedding or ‘unrolling’ of the manifold into a lower dimensional space such that certain relationships between points are preserved. [sent-10, score-0.561]

6 Image manifolds have also been studied in the context of measuring distance between images undergoing known transformations. [sent-12, score-0.248]

7 For example, the tangent distance [20, 21] between two images is computed by generating local approximations of a manifold from known transformations and then computing the distance between these approximated manifolds. [sent-13, score-0.853]

8 In this work, we seek to frame the problem of recovering the structure of a manifold as that of directly learning the transformations a point on a manifold may undergo. [sent-14, score-0.839]

9 Our approach, Locally Smooth Manifold Learning (LSML), attempts to learn a warping function W with d degrees of freedom that can take any point on the manifold and generate its neighbors. [sent-15, score-0.537]

10 We show that LSML can recover the structure of the manifold where data is given, and also in regions where it is not, including regions beyond the support of the original data. [sent-18, score-0.471]

11 We propose a number of uses for the recovered warping function W, including embedding with a natural out-of-sample extension, and in the image domain discuss how it can be used for tasks such as computation of tangent distance, image sequence interpolation, compression, and motion transfer. [sent-19, score-0.818]

12 Finally, we show that by exploiting the manifold smoothness, LSML is robust under conditions where many embedding methods have difficulty. [sent-22, score-0.511]

13 The first is the literature on manifold learning, which serves as the foundation for this work. [sent-29, score-0.347]

14 The second is work in computer vision and computer graphics addressing image warping and generative models for image formation. [sent-30, score-0.292]

15 Traditional methods for nonlinear manifolds include self-organizing maps, principal curves, and variants of multi-dimensional scaling (MDS), among others; see [11] for a brief introduction to these techniques. [sent-34, score-0.157]

16 The field has seen a number of interesting developments in nonlinear manifold learning. [sent-36, score-0.383]

17 A number of related embedding methods have also been introduced; representatives include LLE [17], ISOMAP [22], and more recently SDE [24]. [sent-38, score-0.164]

18 These can be classified as spectral embedding techniques [24]; the embeddings they compute are based on an eigenvector decomposition of an n × n matrix that represents geometrical relationships of some form between the original n points. [sent-40, score-0.271]

19 [2] proposed a method to learn the tangent space of a manifold and demonstrated a preliminary illustration of rotating a small bitmap image by about 1°. [sent-47, score-0.668]

20 Given a specific variation, the method reduces to computing a linear tangent subspace that models the variability of each class. [sent-49, score-0.27]

21 The applications range from video texture synthesis [18] and facial expression extrapolation [8, 23] to face recognition [10] and video rewrite [7]. [sent-56, score-0.268]

22 3 Algorithm: Let D be the dimension of the input space, and assume the data lies on a smooth d-dimensional manifold (d ≪ D). [sent-57, score-0.39]

23 The goal is to find an embedding yi = M−1(xi) that preserves certain properties of the original data, like the distances between all points (classical MDS) or the distances or angles between nearby points (e.g. ...). [sent-60, score-0.303]

24 Instead, we seek to learn a warping function W that can take a point on the manifold and return any neighboring point on the manifold, capturing all the modes of variation of the data. [sent-63, score-0.568]

25 Let us use W(x, ε) to denote the warping of x, with ε ∈ Rd acting on the degrees of freedom of the warp according to the formula M: W(x, ε) = M(y + ε), where y = M−1(x). [sent-64, score-0.247]

26 Only data points xi sampled from one or several manifolds are given. [sent-73, score-0.285]

27 Twenty points (n=20) that lie on a 1D curve (d=1) in a 2D space (D=2) are shown in (a). [sent-77, score-0.165]

28 H maps points in R2 to tangent vectors; in (b) tangent vectors computed over a regularly spaced grid are displayed, with original points (blue) and curve (gray) overlaid. [sent-80, score-0.842]

29 Enforcing smoothness fixes this problem (c); the resulting tangents roughly align to the curve along its entirety. [sent-83, score-0.214]

30 We can traverse the manifold by taking small steps in the direction of the tangent; (d) shows two such paths, generated starting at the red plus and traversing outward in large steps (outer curve) and finer steps (inner curve). [sent-84, score-0.612]
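
The traversal in (d) reduces to repeated small steps along the learned tangent field. Below is a minimal NumPy sketch, assuming only a callable H(x) that returns the D × d tangent basis at x (such as the LSML training sketch later in this summary produces); the function name traverse and the step sizes are illustrative, not from the paper.

```python
import numpy as np

def traverse(H, x0, direction, step=0.05, n_steps=200):
    """Walk along the manifold by repeatedly stepping in the learned
    tangent direction; H maps a point to its (D, d) tangent basis."""
    path = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = path[-1]
        path.append(x + step * (H(x) @ np.asarray(direction)))
    return np.array(path)

# Two paths as in panel (d): large steps (outer curve) vs. fine steps:
# outer = traverse(H, x_start, [1.0], step=0.5, n_steps=20)
# inner = traverse(H, x_start, [1.0], step=0.05, n_steps=200)
```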

31 This generates a coordinate system for the curve resulting in a 1D embedding shown in (e). [sent-86, score-0.276]

32 Training a single H on both curves better fits each curve than training a separate H for each (if the structure of the two manifolds was very different this need not be the case). [sent-90, score-0.223]

33 To proceed, we assume that if xj is a neighbor of xi, then there exists an unknown εij such that W(xi, εij) = xj to within a good approximation. [sent-92, score-0.161]

34 The goal is to find a warping function that can transform a point into its neighbors. [sent-97, score-0.19]

35 Note, however, that the warping function has only d degrees of freedom while a point may have many more neighbors. [sent-98, score-0.19]

36 Let ∆i be the matrix where each column is of the form (xj − xi ) for each neighbor of xi . [sent-100, score-0.149]

37 Minimizing the above can be interpreted as searching for a warping function that directly explains the modes of variation at each point. [sent-103, score-0.221]
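
Concretely, this minimization lends itself to alternating least squares. The sketch below is a hedged reading, not the paper's exact implementation: H is parameterized linearly in RBF features, the coefficients εij are solved per neighbor pair by least squares, and Θ by ridge regression; the names train_lsml and rbf_features, the ridge weight lam, and the random initialization are all assumptions made for illustration.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """RBF features f(x)_k = exp(-||x - c_k||^2 / (2 sigma^2)), one row per point."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))                 # (n, F)

def train_lsml(X, d=2, n_centers=8, sigma=1.0, eps_nn=1.0, n_iter=20, lam=1e-3, seed=0):
    """Learn Theta so that H(x) = reshape(Theta @ f(x), (D, d)) spans the
    local tangent space, by alternating between eps- and Theta-steps."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    centers = X[rng.choice(n, n_centers, replace=False)]
    F = rbf_features(X, centers, sigma)                   # (n, n_centers)
    nF = n_centers

    # epsilon-NN neighborhoods and the difference vectors (columns of Delta_i)
    pairs, deltas = [], []
    for i in range(n):
        for j in np.where(np.linalg.norm(X - X[i], axis=1) < eps_nn)[0]:
            if j != i:
                pairs.append(i)
                deltas.append(X[j] - X[i])
    pairs = np.array(pairs)
    Delta = np.array(deltas).T                            # (D, m)
    m = Delta.shape[1]

    eps = rng.standard_normal((d, m))                     # one epsilon per pair
    Theta = np.zeros((D, d * nF))
    for _ in range(n_iter):
        # Theta-step: prediction is Theta @ kron(eps_ij, f(x_i)); ridge regression.
        A = np.stack([np.kron(eps[:, k], F[pairs[k]]) for k in range(m)], axis=1)
        Theta = Delta @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(d * nF))
        # eps-step: least squares of each delta against the tangent basis at x_i.
        for k in range(m):
            Hi = Theta.reshape(D, d, nF) @ F[pairs[k]]    # (D, d) basis at x_i
            eps[:, k] = np.linalg.lstsq(Hi, Delta[:, k], rcond=None)[0]

    def H(x):
        """Tangent basis at an arbitrary point x, on or off the training data."""
        f = rbf_features(np.asarray(x)[None], centers, sigma)[0]
        return Theta.reshape(D, d, nF) @ f                # (D, d)
    return H
```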

38 LSML is used to recover the embedding of the S-curve under a number of sampling conditions. [sent-117, score-0.27]

39 In each plot we show the original points along with the computed embedding (rotated to align vertically); correspondence is indicated by coloring/shading (color was determined by the y-coordinate of the embedding). [sent-118, score-0.358]

40 In each case LSML was run with f = 8, d = 2, and neighbors computed by ε-NN with ε = 1 (the height of the curve is 4). [sent-119, score-0.146]
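
Under the sketch above, these quoted settings would translate roughly as follows; the S-curve generator here is an illustrative stand-in, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(-1.5 * np.pi, 1.5 * np.pi, 500)
X = np.c_[np.sin(t),
          4.0 * rng.random(500),                 # curve height of 4, as quoted
          np.sign(t) * (np.cos(t) - 1.0)]        # classic S-curve cross-section
H = train_lsml(X, d=2, n_centers=8, sigma=1.0, eps_nn=1.0)
print(H(X[0]).shape)                             # (3, 2): two tangent directions
```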

41 The embeddings shown were recovered from data that was: (a) densely sampled (n=500), (b) sparsely sampled (n=100), (c) highly structured (n=190), and (d) noisy (n=500, random Gaussian noise with σ = ...). [sent-120, score-0.277]

42 Finally, nowhere in the construction have we enforced that the learned tangent vectors be orthogonal (such a constraint would only be appropriate if the manifold were isometric to a plane). [sent-129, score-0.703]

43 Using more RBFs results in functions that better fit local variations of the data, but whose generalization abilities to other points on the manifold may be weaker. [sent-136, score-0.397]

44 LSML learns a function H from points in RD to tangent directions that agree, up to a linear combination, with estimated tangent directions at the original training points of the manifold. [sent-142, score-0.776]

45 By constraining H to be smooth (through use of a limited number of RBFs), we can compute tangents at points not seen during training, including points that may not lie on the underlying manifold. [sent-143, score-0.265]

46 Finally, given multiple non-overlapping manifolds with similar structure, we can train a single H to correctly predict the tangents of each, allowing information to be shared. [sent-145, score-0.199]

47 Fig. 2 shows LSML successfully applied for recovering the embedding of the “S-curve”. [sent-150, score-0.217]

48 After H is learned, the embedding is computed by choosing a random point on the manifold. [Figure 3: Reconstruction.] [sent-153, score-0.541]

49 (a) Points sampled from the Swiss-roll manifold (middle), some recovered tangent vectors in a zoomed-in region (left) and embedding found by LSML (right). [sent-155, score-0.954]

50 Reconstruction of the Swiss-roll (b), created by a backprojection from regularly spaced grid points in the embedding (traversal was done from a single original point located at the base of the roll, see text for details). [sent-157, score-0.418]

51 Another reconstruction (c), this time using all points and extending the grid well beyond the support of the original data. [sent-158, score-0.212]

52 (d) Reconstruction of a hemisphere by traversing outward from the topmost point; note the reconstruction in regions with no points. [sent-161, score-0.285]

53 and establishing a coordinate system by traversing outward (the same procedure can be used to embed novel points, providing a natural out-of-sample extension). [sent-162, score-0.299]
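
One plausible rendering of this outward traversal is a breadth-first pass that propagates coordinates through the neighbor graph; embed_by_traversal is a hypothetical name, and the scheme below is one reasonable reading, not necessarily the paper's exact procedure.

```python
from collections import deque
import numpy as np

def embed_by_traversal(X, H, eps_nn=1.0, seed_idx=0):
    """Assign each point a d-dim coordinate, starting from a seed at the
    origin: a neighbor's coordinate is the current point's coordinate plus
    the tangent-space coefficients of the displacement between them."""
    n = len(X)
    d = H(X[seed_idx]).shape[1]
    Y = np.full((n, d), np.nan)
    Y[seed_idx] = 0.0
    queue = deque([seed_idx])
    while queue:
        i = queue.popleft()
        Hi = H(X[i])
        for j in np.where(np.linalg.norm(X - X[i], axis=1) < eps_nn)[0]:
            if np.isnan(Y[j]).any():
                coef, *_ = np.linalg.lstsq(Hi, X[j] - X[i], rcond=None)
                Y[j] = Y[i] + coef
                queue.append(j)
    return Y
```

The same least-squares update embeds a novel point from its nearest already-embedded neighbor, which is what gives the out-of-sample extension.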

54 LSML could be used to smooth a manifold as a preprocessing step for manifold learning algorithms; however, a full comparison is outside the scope of this work. [sent-174, score-0.725]

55 Having learned H and computed an embedding, we can also backproject from a point y ∈ Rd to a point x on the manifold. [sent-175, score-0.49]

56 This is done by finding the coordinate yi of the closest point xi in the original data, then traversing from xi by εj = yj − yij along each tangent direction j (see Fig. 3). [sent-177, score-0.562]
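
A sketch of this backprojection, assuming an embedding Y as produced above; integrating the residual coordinate in several small sub-steps is an assumption made here for numerical stability, not a detail from the paper.

```python
import numpy as np

def backproject(y, X, Y, H, n_substeps=10):
    """Map an embedding coordinate y back to input space: start at the
    training point whose embedding is closest to y, then step along the
    learned tangent field until the coordinate residual is used up."""
    i = int(np.argmin(np.linalg.norm(Y - np.asarray(y), axis=1)))
    x, resid = X[i].astype(float), np.asarray(y) - Y[i]
    for _ in range(n_substeps):
        x = x + H(x) @ (resid / n_substeps)
    return x
```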

57 Fig. 3(a) shows tangents and an embedding recovered by LSML on the Swiss-roll. [sent-180, score-0.297]

58 In Fig. 3(b) we backproject from a grid of points in R2; by linking adjacent sets of points to form quadrilaterals we can display the resulting backprojected points as a surface. [sent-182, score-0.275]

59 In Fig. 3(c) we likewise do a backprojection (this time keeping all the original points); however, we backproject grid points well below and above the support of the original data. [sent-184, score-0.258]

60 Fig. 3(d) shows the reconstruction of a unit hemisphere by traversing outward from the topmost point. [sent-188, score-0.347]

61 In the latter case an embedding is not possible; however, we can still easily recover H for both (only hemisphere results are shown). [sent-190, score-0.261]

62 5 Results on Images Before continuing, we consider potential applications of H in the image domain, including tangent distance estimation, nonlinear interpolation, extrapolation, compression, and motion transfer. [sent-191, score-0.432]

63 Tangent distance estimation: H computes the tangent and can be used directly in invariant recognition schemes such as [21]. [sent-193, score-0.343]
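
As a concrete illustration, a one-sided tangent distance falls out of the learned tangents in closed form; this sketch measures the distance from x2 to the local linear approximation of the manifold at x1 (the schemes in [20, 21] use a two-sided variant).

```python
import numpy as np

def tangent_distance(x1, x2, H):
    """min over eps of ||x1 + H(x1) @ eps - x2||, solved by least squares."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    H1 = H(x1)                                    # (D, d) learned tangent basis
    coef, *_ = np.linalg.lstsq(H1, x2 - x1, rcond=None)
    return float(np.linalg.norm(x1 + H1 @ coef - x2))
```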

64 Nonlinear interpolation (cf. Fig. 3) is of potential use in tasks such as frame rate up-conversion, reconstructing dropped frames and view synthesis. [sent-198, score-0.148]

65 Motion transfer (Fig. 1(f)): a recovered warp may be used on an entirely novel image. [sent-202, score-0.156]

66 These applications will depend not only on the accuracy of the learned H but also on how close a set of images is to a smooth manifold. [sent-203, score-0.187]

67 (d) Several transformations obtained after multiple steps along the manifold for different linear combinations of Hθ(x). [sent-209, score-0.388]

68 The key insight to working with images is that although images can live in very high dimensional spaces (with D ≈ 10^6 quite common), we do not have to learn a transformation with that many parameters. [sent-212, score-0.224]

69 Let x be an image and H·k(x), k ∈ [1, d], be the d tangent images. [sent-213, score-0.321]

70 The per-patch assumption is not always suitable, most notably for transformations that are based only on image coordinates and are independent of appearance. [sent-217, score-0.19]

71 We rewrite each image xi ∈ RD as an s^2 × D matrix Xi where each row contains pixels from one patch in xi (in training we sub-sample patches). [sent-220, score-0.288]

72 The learned Θ is a 2 × s^2 matrix, which can be visualized as two s × s images as in Fig. [sent-226, score-0.144]
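
A hedged sketch of evaluating the d tangent images of a grayscale image under this per-patch parameterization; the function name and the plain sliding-window loop are illustrative (a practical implementation would use a vectorized correlation).

```python
import numpy as np

def patch_tangent_images(img, Theta, s):
    """The k-th tangent image at pixel p is the inner product of the s x s
    patch around p with the k-th row of Theta (shape (d, s*s), e.g. d = 2)."""
    h, w = img.shape                    # assumes a 2-D grayscale image
    d = Theta.shape[0]
    pad = s // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros((d, h, w))
    for r in range(h):
        for c in range(w):
            out[:, r, c] = Theta @ padded[r:r + s, c:c + s].ravel()
    return out                          # d tangent images, same size as img

# e.g. with the quoted eye settings d = 2, s = 19:
# tangents = patch_tangent_images(img, Theta, s=19)
```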

73 Here, we used a step size which gives roughly 10 interpolated frames between each pair of original frames. [sent-241, score-0.201]

74 With out-of-plane rotation, information must be created and the problem becomes ambiguous (multiple manifolds can intersect at a single point), hence generalization across images is not expected to be good. [sent-242, score-0.241]

75 In Fig. 6, results are shown on an eye manifold with 2 degrees of freedom. [sent-244, score-0.571]

76 In Fig. 6(d), we applied the transformation learned from one person’s eye to a single image of another person’s eye (taken under the same imaging conditions). [sent-249, score-0.6]

77 Figure 8: rotation of a teapot (data from [24], sub-sampled from 400 to 30 training images); the top row shows the learned transformation starting from the central image. [sent-253, score-0.388]

78 The bottom row shows ground truth images; the dashed box contains the 30 training images. [sent-256, score-0.251]

79 By observing the tip, handle and the two white blobs on the teapot, and comparing to ground truth data, we can observe the quality of the learned transformation on seen data (b) and unseen data (d), both starting from a single frame (c). [sent-261, score-0.297]

80 Here, we used a step size which gives roughly 10 interpolated frames between each pair of original frames. [sent-271, score-0.201]

81 With out-of-plane rotation, information must be created and the problem becomes ambiguous (multiple manifolds can intersect at a single point), hence generalization across images is not expected to be good. [sent-272, score-0.293]

82 Figure 1: Traversing the eye manifold. [sent-274, score-0.224]

83 Here d = 2, f = 600, s = 19, and around 5000 patches were sampled; 2 frames were considered neighbors if they were adjacent in time. [sent-278, score-0.416]

84 Figure 6: Traversing the eye manifold just outside the support of the training data: images generated by LSML from the central image along five different lines (3 vertical, 2 horizontal); the inner 8 frames lie just outside the support of the training data (not shown), the outer 8 are extrapolated beyond it. [sent-282, score-0.456]

86 Figure 6(b) details Hθ(x) for two images in a warping sequence: a linear combination can lead the iris/eyelid to move in different directions. [sent-284, score-0.312]

90 Figure 6(c) shows extrapolation far beyond the training data. [sent-298, score-0.224]

96 Finally, Figure 6(d) shows how the eye manifold we learned on one eye can be applied to a novel eye not seen during training. [sent-313, score-0.313]

97 Thus, motion transfer was possible: Hθ trained on one series of images generalized to a different set of images. [sent-315, score-0.35]

98 Rather than pose manifold learning as the problem of recovering an embedding, we posed the problem in terms of learning a warping function for traversing the manifold. [sent-317, score-0.742]

99 Proposed uses of LSML include tangent distance estimation, frame rate up-conversion, video compression and motion transfer. [sent-319, score-0.504]

100 Transformation invariance in pattern recognition: tangent distance and tangent propagation. [sent-462, score-0.308]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('lsml', 0.563), ('manifold', 0.347), ('tangent', 0.27), ('eye', 0.224), ('warping', 0.19), ('embedding', 0.164), ('traversing', 0.152), ('lle', 0.129), ('manifolds', 0.121), ('somap', 0.117), ('frames', 0.097), ('images', 0.089), ('extrapolation', 0.086), ('rd', 0.085), ('rbfs', 0.084), ('tangents', 0.078), ('teapot', 0.078), ('curve', 0.071), ('embeddings', 0.068), ('roll', 0.064), ('outward', 0.062), ('hemisphere', 0.062), ('outer', 0.06), ('xi', 0.06), ('video', 0.059), ('backproject', 0.058), ('errorlin', 0.058), ('warp', 0.057), ('patch', 0.057), ('recovered', 0.055), ('learned', 0.055), ('sampled', 0.054), ('recovering', 0.053), ('plane', 0.052), ('rotation', 0.052), ('traverse', 0.051), ('image', 0.051), ('frame', 0.051), ('mds', 0.051), ('beyond', 0.05), ('points', 0.05), ('patches', 0.05), ('compression', 0.049), ('sparsely', 0.046), ('extrapolated', 0.046), ('transformation', 0.046), ('neighbors', 0.045), ('novel', 0.044), ('lie', 0.044), ('cf', 0.043), ('iris', 0.043), ('smooth', 0.043), ('nn', 0.041), ('coordinate', 0.041), ('vec', 0.041), ('nitesimal', 0.041), ('transformations', 0.041), ('reconstruction', 0.04), ('original', 0.039), ('unseen', 0.039), ('truth', 0.039), ('atlab', 0.039), ('backprojection', 0.039), ('outmost', 0.039), ('rabaud', 0.039), ('ucsd', 0.039), ('distance', 0.038), ('motion', 0.037), ('ground', 0.036), ('nonlinear', 0.036), ('ij', 0.036), ('recognition', 0.035), ('recover', 0.035), ('align', 0.034), ('adjacent', 0.034), ('doll', 0.034), ('epitomic', 0.034), ('interpolated', 0.034), ('directions', 0.033), ('grid', 0.033), ('central', 0.033), ('modes', 0.031), ('singular', 0.031), ('topmost', 0.031), ('intersect', 0.031), ('tip', 0.031), ('bernardo', 0.031), ('blobs', 0.031), ('isometric', 0.031), ('charting', 0.031), ('training', 0.031), ('roughly', 0.031), ('smoothness', 0.031), ('outside', 0.031), ('computed', 0.03), ('preserving', 0.03), ('smoothed', 0.029), ('rewrite', 0.029), ('neighbor', 0.029), ('regularly', 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 120 nips-2006-Learning to Traverse Image Manifolds

Author: Piotr Dollár, Vincent Rabaud, Serge J. Belongie

Abstract: We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data. Applications of our proposed technique include embedding with a natural out-of-sample extension and tasks such as tangent distance estimation, frame rate up-conversion, video compression and motion transfer. 1

2 0.15813784 200 nips-2006-Unsupervised Regression with Applications to Nonlinear System Identification

Author: Ali Rahimi, Ben Recht

Abstract: We derive a cost functional for estimating the relationship between high-dimensional observations and the low-dimensional process that generated them with no input-output examples. Limiting our search to invertible observation functions confers numerous benefits, including a compact representation and no suboptimal local minima. Our approximation algorithms for optimizing this cost functional are fast and give diagnostic bounds on the quality of their solution. Our method can be viewed as a manifold learning algorithm that utilizes a prior on the low-dimensional manifold coordinates. The benefits of taking advantage of such priors in manifold learning and searching for the inverse observation functions in system identification are demonstrated empirically by learning to track moving targets from raw measurements in a sensor network setting and in an RFID tracking experiment. 1

3 0.14615399 127 nips-2006-MLLE: Modified Locally Linear Embedding Using Multiple Weights

Author: Zhenyue Zhang, Jing Wang

Abstract: The locally linear embedding (LLE) is improved by introducing multiple linearly independent local weight vectors for each neighborhood. We characterize the reconstruction weights and show the existence of the linearly independent weight vectors at each neighborhood. The modified locally linear embedding (MLLE) proposed in this paper is much more stable. It can retrieve the ideal embedding if MLLE is applied on data points sampled from an isometric manifold. MLLE is also compared with the local tangent space alignment (LTSA). Numerical examples are given that show the improvement and efficiency of MLLE. 1

4 0.13960114 128 nips-2006-Manifold Denoising

Author: Matthias Hein, Markus Maier

Abstract: We consider the problem of denoising a noisily sampled submanifold M in Rd , where the submanifold M is a priori unknown and we are only given a noisy point sample. The presented denoising algorithm is based on a graph-based diffusion process of the point sample. We analyze this diffusion process using recent results about the convergence of graph Laplacians. In the experiments we show that our method is capable of dealing with non-trivial high-dimensional noise. Moreover using the denoising algorithm as pre-processing method we can improve the results of a semi-supervised learning algorithm. 1

5 0.12902415 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

Author: Wolf Kienzle, Felix A. Wichmann, Matthias O. Franz, Bernhard Schölkopf

Abstract: This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system. 1

6 0.12425859 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

7 0.10857511 60 nips-2006-Convergence of Laplacian Eigenmaps

8 0.092381567 104 nips-2006-Large-Scale Sparsified Manifold Regularization

9 0.090265326 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

10 0.081229188 151 nips-2006-On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts

11 0.07876531 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

12 0.075671613 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

13 0.072564542 134 nips-2006-Modeling Human Motion Using Binary Latent Variables

14 0.063407615 110 nips-2006-Learning Dense 3D Correspondence

15 0.06327416 153 nips-2006-Online Clustering of Moving Hyperplanes

16 0.062655799 66 nips-2006-Detecting Humans via Their Pose

17 0.060978148 42 nips-2006-Bayesian Image Super-resolution, Continued

18 0.054900326 130 nips-2006-Max-margin classification of incomplete data

19 0.052826144 105 nips-2006-Large Margin Component Analysis

20 0.052803636 78 nips-2006-Fast Discriminative Visual Codebooks using Randomized Clustering Forests


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.2), (1, 0.049), (2, 0.086), (3, 0.066), (4, 0.028), (5, -0.097), (6, -0.08), (7, -0.067), (8, -0.004), (9, 0.106), (10, 0.106), (11, -0.135), (12, 0.02), (13, 0.035), (14, 0.061), (15, 0.015), (16, -0.064), (17, -0.115), (18, -0.121), (19, 0.061), (20, 0.007), (21, -0.1), (22, 0.04), (23, -0.134), (24, -0.16), (25, 0.015), (26, -0.109), (27, -0.107), (28, -0.185), (29, -0.175), (30, 0.109), (31, 0.112), (32, 0.052), (33, 0.026), (34, 0.029), (35, -0.128), (36, -0.008), (37, -0.017), (38, 0.075), (39, -0.033), (40, 0.107), (41, -0.049), (42, -0.075), (43, -0.093), (44, 0.042), (45, -0.045), (46, 0.128), (47, 0.023), (48, 0.024), (49, 0.127)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93580961 120 nips-2006-Learning to Traverse Image Manifolds

Author: Piotr Dollár, Vincent Rabaud, Serge J. Belongie

Abstract: We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data. Applications of our proposed technique include embedding with a natural out-of-sample extension and tasks such as tangent distance estimation, frame rate up-conversion, video compression and motion transfer. 1

2 0.61144125 128 nips-2006-Manifold Denoising

Author: Matthias Hein, Markus Maier

Abstract: We consider the problem of denoising a noisily sampled submanifold M in Rd , where the submanifold M is a priori unknown and we are only given a noisy point sample. The presented denoising algorithm is based on a graph-based diffusion process of the point sample. We analyze this diffusion process using recent results about the convergence of graph Laplacians. In the experiments we show that our method is capable of dealing with non-trivial high-dimensional noise. Moreover using the denoising algorithm as pre-processing method we can improve the results of a semi-supervised learning algorithm. 1

3 0.60261816 200 nips-2006-Unsupervised Regression with Applications to Nonlinear System Identification

Author: Ali Rahimi, Ben Recht

Abstract: We derive a cost functional for estimating the relationship between high-dimensional observations and the low-dimensional process that generated them with no input-output examples. Limiting our search to invertible observation functions confers numerous benefits, including a compact representation and no suboptimal local minima. Our approximation algorithms for optimizing this cost functional are fast and give diagnostic bounds on the quality of their solution. Our method can be viewed as a manifold learning algorithm that utilizes a prior on the low-dimensional manifold coordinates. The benefits of taking advantage of such priors in manifold learning and searching for the inverse observation functions in system identification are demonstrated empirically by learning to track moving targets from raw measurements in a sensor network setting and in an RFID tracking experiment. 1

4 0.55315268 127 nips-2006-MLLE: Modified Locally Linear Embedding Using Multiple Weights

Author: Zhenyue Zhang, Jing Wang

Abstract: The locally linear embedding (LLE) is improved by introducing multiple linearly independent local weight vectors for each neighborhood. We characterize the reconstruction weights and show the existence of the linearly independent weight vectors at each neighborhood. The modified locally linear embedding (MLLE) proposed in this paper is much more stable. It can retrieve the ideal embedding if MLLE is applied on data points sampled from an isometric manifold. MLLE is also compared with the local tangent space alignment (LTSA). Numerical examples are given that show the improvement and efficiency of MLLE. 1

5 0.41910246 105 nips-2006-Large Margin Component Analysis

Author: Lorenzo Torresani, Kuang-chih Lee

Abstract: Metric learning has been shown to significantly improve the accuracy of k-nearest neighbor (kNN) classification. In problems involving thousands of features, distance learning algorithms cannot be used due to overfitting and high computational complexity. In such cases, previous work has relied on a two-step solution: first apply dimensionality reduction methods to the data, and then learn a metric in the resulting low-dimensional subspace. In this paper we show that better classification performance can be achieved by unifying the objectives of dimensionality reduction and metric learning. We propose a method that solves for the low-dimensional projection of the inputs, which minimizes a metric objective aimed at separating points in different classes by a large margin. This projection is defined by a significantly smaller number of parameters than metrics learned in input space, and thus our optimization reduces the risks of overfitting. Theory and results are presented for both a linear as well as a kernelized version of the algorithm. Overall, we achieve classification rates similar, and in several cases superior, to those of support vector machines. 1

6 0.41570112 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

7 0.4128685 104 nips-2006-Large-Scale Sparsified Manifold Regularization

8 0.41283742 60 nips-2006-Convergence of Laplacian Eigenmaps

9 0.4061124 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

10 0.343858 94 nips-2006-Image Retrieval and Classification Using Local Distance Functions

11 0.33051845 129 nips-2006-Map-Reduce for Machine Learning on Multicore

12 0.32340705 182 nips-2006-Statistical Modeling of Images with Fields of Gaussian Scale Mixtures

13 0.31367028 25 nips-2006-An Application of Reinforcement Learning to Aerobatic Helicopter Flight

14 0.31324646 121 nips-2006-Learning to be Bayesian without Supervision

15 0.30772123 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency

16 0.29474366 52 nips-2006-Clustering appearance and shape by learning jigsaws

17 0.29416689 111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations

18 0.28108141 4 nips-2006-A Humanlike Predictor of Facial Attractiveness

19 0.27784574 153 nips-2006-Online Clustering of Moving Hyperplanes

20 0.26988575 97 nips-2006-Inducing Metric Violations in Human Similarity Judgements


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.08), (3, 0.455), (7, 0.075), (9, 0.05), (20, 0.018), (22, 0.034), (44, 0.042), (57, 0.069), (65, 0.035), (69, 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.92700726 200 nips-2006-Unsupervised Regression with Applications to Nonlinear System Identification

Author: Ali Rahimi, Ben Recht

Abstract: We derive a cost functional for estimating the relationship between high-dimensional observations and the low-dimensional process that generated them with no input-output examples. Limiting our search to invertible observation functions confers numerous benefits, including a compact representation and no suboptimal local minima. Our approximation algorithms for optimizing this cost functional are fast and give diagnostic bounds on the quality of their solution. Our method can be viewed as a manifold learning algorithm that utilizes a prior on the low-dimensional manifold coordinates. The benefits of taking advantage of such priors in manifold learning and searching for the inverse observation functions in system identification are demonstrated empirically by learning to track moving targets from raw measurements in a sensor network setting and in an RFID tracking experiment. 1

same-paper 2 0.90934944 120 nips-2006-Learning to Traverse Image Manifolds

Author: Piotr Dollár, Vincent Rabaud, Serge J. Belongie

Abstract: We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data. Applications of our proposed technique include embedding with a natural out-of-sample extension and tasks such as tangent distance estimation, frame rate up-conversion, video compression and motion transfer. 1

3 0.87407535 17 nips-2006-A recipe for optimizing a time-histogram

Author: Hideaki Shimazaki, Shigeru Shinomoto

Abstract: The time-histogram method is a handy tool for capturing the instantaneous rate of spike occurrence. In most of the neurophysiological literature, the bin size that critically determines the goodness of the fit of the time-histogram to the underlying rate has been selected by individual researchers in an unsystematic manner. We propose an objective method for selecting the bin size of a time-histogram from the spike data, so that the time-histogram best approximates the unknown underlying rate. The resolution of the histogram increases, or the optimal bin size decreases, with the number of spike sequences sampled. It is notable that the optimal bin size diverges if only a small number of experimental trials are available from a moderately fluctuating rate process. In this case, any attempt to characterize the underlying spike rate will lead to spurious results. Given a paucity of data, our method can also suggest how many more trials are needed until the set of data can be analyzed with the required resolution. 1

4 0.79813331 104 nips-2006-Large-Scale Sparsified Manifold Regularization

Author: Ivor W. Tsang, James T. Kwok

Abstract: Semi-supervised learning is more powerful than supervised learning by using both labeled and unlabeled data. In particular, the manifold regularization framework, together with kernel methods, leads to the Laplacian SVM (LapSVM) that has demonstrated state-of-the-art performance. However, the LapSVM solution typically involves kernel expansions of all the labeled and unlabeled examples, and is slow on testing. Moreover, existing semi-supervised learning methods, including the LapSVM, can only handle a small number of unlabeled examples. In this paper, we integrate manifold regularization with the core vector machine, which has been used for large-scale supervised and unsupervised learning. By using a sparsified manifold regularizer and formulating as a center-constrained minimum enclosing ball problem, the proposed method produces sparse solutions with low time and space complexities. Experimental results show that it is much faster than the LapSVM, and can handle a million unlabeled examples on a standard PC; while the LapSVM can only handle several thousand patterns. 1

5 0.52494377 127 nips-2006-MLLE: Modified Locally Linear Embedding Using Multiple Weights

Author: Zhenyue Zhang, Jing Wang

Abstract: The locally linear embedding (LLE) is improved by introducing multiple linearly independent local weight vectors for each neighborhood. We characterize the reconstruction weights and show the existence of the linearly independent weight vectors at each neighborhood. The modified locally linear embedding (MLLE) proposed in this paper is much stable. It can retrieve the ideal embedding if MLLE is applied on data points sampled from an isometric manifold. MLLE is also compared with the local tangent space alignment (LTSA). Numerical examples are given that show the improvement and efficiency of MLLE. 1

6 0.51242638 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

7 0.49834466 121 nips-2006-Learning to be Bayesian without Supervision

8 0.49635276 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

9 0.49332947 128 nips-2006-Manifold Denoising

10 0.4860689 153 nips-2006-Online Clustering of Moving Hyperplanes

11 0.48564106 106 nips-2006-Large Margin Hidden Markov Models for Automatic Speech Recognition

12 0.47837174 15 nips-2006-A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo

13 0.47117397 67 nips-2006-Differential Entropic Clustering of Multivariate Gaussians

14 0.46502528 48 nips-2006-Branch and Bound for Semi-Supervised Support Vector Machines

15 0.46399322 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis

16 0.45704696 95 nips-2006-Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions

17 0.45041502 42 nips-2006-Bayesian Image Super-resolution, Continued

18 0.45020127 34 nips-2006-Approximate Correspondences in High Dimensions

19 0.45005578 110 nips-2006-Learning Dense 3D Correspondence

20 0.44996005 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron