nips nips2008 nips2008-151 knowledge-graph by maker-knowledge-mining

151 nips-2008-Non-parametric Regression Between Manifolds


Source: pdf

Author: Florian Steinke, Matthias Hein

Abstract: This paper discusses non-parametric regression between Riemannian manifolds. This learning problem arises frequently in many application areas, ranging from signal processing and computer vision to robotics and computer graphics. We present a new algorithmic scheme for the solution of this general learning problem based on regularized empirical risk minimization. The regularization functional takes into account the geometry of input and output manifold, and we show that it implements a prior which is particularly natural. Moreover, we demonstrate that our algorithm performs well in a difficult surface registration problem.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The regularization functional takes into account the geometry of input and output manifold, and we show that it implements a prior which is particularly natural. [sent-8, score-0.176]

2 Moreover, we demonstrate that our algorithm performs well in a difficult surface registration problem. [sent-9, score-0.17]

3 In machine learning, manifold structure has so far been mainly used in manifold learning [1] to enhance learning methods, especially in semi-supervised learning. [sent-10, score-0.608]

4 Namely, we want to predict a mapping between known Riemannian manifolds based on input/output example pairs. [sent-12, score-0.279]

5 In the statistics literature [2], this problem is treated for certain special output manifolds in directional statistics, where the main applications are to predict angles (circle), directions (sphere) or orientations (set of orthogonal matrices). [sent-13, score-0.282]

6 More complex manifolds appear naturally in signal processing [3, 4], computer graphics [5, 6], and robotics [7]. [sent-14, score-0.275]

7 Impressive results in shape processing have recently been obtained [8, 9] by imposing a Riemannian metric on the set of shapes, so that shape interpolation is reduced to the estimation of a smooth curve in the manifold of all shapes. [sent-15, score-0.435]

8 The problem of learning, when input and output domain are Riemannian manifolds, is quite distinct from standard multivariate regression or manifold learning. [sent-17, score-0.48]

9 While this approach still works for manifold-valued input, it is no longer feasible if the output space is a manifold, as general Riemannian manifolds do not allow an addition operation, see Figure 1 for an illustration. [sent-20, score-0.306]

10 One way to learn manifold-valued mappings using standard regression techniques is to learn mappings directly into charts of the manifold. [sent-21, score-0.322]

11 Another approach is to use an embedding of the manifold in Euclidean space where one can use standard multivariate regression and then project the learned mapping onto the manifold. [sent-23, score-0.519]

12 Even if the original mapping in Euclidean space is smooth, its projection onto the manifold might be discontinuous. [sent-25, score-0.47]

13 Averaging of the green points which are close with respect to the geodesic distance is still reasonable. [sent-29, score-0.283]

14 However, the blue points which are close with respect to the Euclidean distance are not necessarily close in geodesic distance and therefore averaging can fail. [sent-30, score-0.334]
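
To make this point concrete, here is a small illustrative sketch (our own, not code from the paper): on the circle S^1, the Euclidean mean of two nearly antipodal points lies near the origin, so projecting it back onto the circle is ill-defined, whereas the intrinsic (Karcher) mean, computed from geodesic errors, stays well behaved. All names below are ours.

```python
import numpy as np

def projected_euclidean_mean(angles):
    """Average the embedded points in R^2, then project back onto the circle."""
    pts = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    m = pts.mean(axis=0)
    n = np.linalg.norm(m)
    if n < 1e-8:   # degenerate: the chord midpoint sits at the origin
        raise ValueError("Euclidean mean (almost) at the origin; projection onto S^1 undefined")
    return m / n

def karcher_mean(angles, iters=50):
    """Intrinsic mean: iteratively average the geodesic errors (angles wrapped to (-pi, pi])."""
    theta = angles[0]
    for _ in range(iters):
        err = np.angle(np.exp(1j * (angles - theta)))
        theta += err.mean()
    return np.array([np.cos(theta), np.sin(theta)])

angles = np.array([0.1, np.pi + 0.1])      # two antipodal points on the circle
print(karcher_mean(angles))                # a point on the circle, halfway in geodesic distance
try:
    print(projected_euclidean_mean(angles))
except ValueError as e:
    print("extrinsic averaging fails:", e)
```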

15 Here, we provide a theoretical analysis of the preferred mappings of the employed regularization functional, and we show that these can be seen as natural generalizations of linear mappings in Euclidean space to the manifold-valued case. [sent-32, score-0.329]

16 It will become evident that this property makes the regularizer particularly suited as a prior for learning mappings between manifolds. [sent-33, score-0.187]

17 In our implementation, the manifolds can be either given analytically or as point clouds in Euclidean space, rendering our approach applicable for almost any manifold-valued regression problem. [sent-35, score-0.304]

18 In the experimental section we demonstrate good performance in a surface registration task, where both input manifold and output manifold are non-Euclidean – a task which could not be solved previously in [6]. [sent-36, score-0.863]

19 Finally, in Section 4, we describe the new algorithm for learning mappings between Riemannian manifolds, and provide performance results for a toy problem and a surface registration task in Section 5. [sent-38, score-0.3]

20 Suppose we are given two Riemannian manifolds, the input manifold M of dimension m and the target manifold N of dimension n. [sent-39, score-0.745]

21 Since most Riemannian manifolds are given in this form anyway – think of the sphere or the set of orthogonal matrices – this is only a minor restriction. [sent-41, score-0.361]

22 This learning problem reduces to standard multivariate regression if M and N are both Euclidean spaces Rm and Rn , and to regression on a manifold if at least N is Euclidean. [sent-43, score-0.457]

23 A quite direct generalization to a loss function on a Riemannian manifold N is to use the squared geodesic distance in N, $L(Y_i, \Psi(X_i)) = d_N^2(Y_i, \Psi(X_i))$. [sent-46, score-0.583]

24 The correspondence to the multivariate case can be seen from the fact that $d_N(Y_i, \Psi(X_i))$ is the length of the shortest path between $Y_i$ and $\Psi(X_i)$ in N, just as $\|f(X_i) - Y_i\|$ is the length of the shortest path, namely the length of the straight line, between $f(X_i)$ and $Y_i$ in $\mathbb{R}^n$. [sent-47, score-0.165]
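
As a concrete illustration (our sketch, assuming the target manifold N is the unit sphere S^2 embedded in R^3), the empirical risk with the squared geodesic loss can be written as follows; the map Psi and the data are made up.

```python
import numpy as np

def sphere_geodesic_distance(y, z):
    """Great-circle distance between unit vectors y, z on the sphere."""
    c = np.clip(np.dot(y, z), -1.0, 1.0)   # clip guards against rounding outside [-1, 1]
    return np.arccos(c)

def empirical_risk(Psi, X, Y):
    """Mean squared geodesic loss of a map Psi evaluated at inputs X with targets Y."""
    return np.mean([sphere_geodesic_distance(Psi(x), y) ** 2 for x, y in zip(X, Y)])

# Toy usage: a (made-up) map from [0, 1] onto a great circle of S^2.
Psi = lambda t: np.array([np.cos(np.pi * t), np.sin(np.pi * t), 0.0])
X = np.linspace(0.0, 1.0, 5)
Y = np.stack([Psi(t) for t in X])          # noise-free targets -> zero risk
print(empirical_risk(Psi, X, Y))
```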

25 Regularizer: The regularization functional should measure the smoothness of the mapping Ψ. [sent-48, score-0.168]

26 We use the so-called Eells energy introduced in [6] as our smoothness functional which, as we will show in the next section, implements a particularly well-suited prior over mappings for many applications. [sent-49, score-0.419]

27 Let $x^\alpha$ be Cartesian coordinates in M and let $\Psi(x)$ be given in Cartesian coordinates in $\mathbb{R}^t$; then the Eells energy can be written as $S_{\mathrm{Eells}}(\Psi) = \int_{M \subseteq \mathbb{R}^m} \sum_{\alpha,\beta=1}^{m} \big\| \big(\tfrac{\partial^2 \Psi}{\partial x^\alpha \partial x^\beta}\big)^{\top} \big\|_{\mathbb{R}^t}^{2} \, dx$ (2), where $(\cdot)^{\top}$ denotes the projection onto the tangent space $T_{\Psi(x)}N$ of the target manifold at $\Psi(x)$. [sent-53, score-0.94]
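
The following sketch (our own discretisation, assuming M = [0, 1] so that no Christoffel correction is needed, and N = S^2 so that the projection onto the tangent space at y is I - y y^T) evaluates a finite-difference version of (2) for a sampled curve; for a constant-speed great circle the tangential part of the second derivative vanishes, so the energy is close to zero.

```python
import numpy as np

def eells_energy_curve(points, h):
    """points: (k, 3) samples Psi(x_i) on S^2 at equispaced x_i with spacing h."""
    energy = 0.0
    for i in range(1, len(points) - 1):
        second = (points[i - 1] - 2 * points[i] + points[i + 1]) / h ** 2
        y = points[i]
        tangential = second - np.dot(second, y) * y    # projection I - y y^T for the sphere
        energy += np.dot(tangential, tangential) * h   # quadrature weight
    return energy

# Constant-speed great circle: images of equispaced inputs are equispaced on N,
# so the intrinsic (tangential) acceleration, and hence the energy, is ~0.
x = np.linspace(0.0, 1.0, 200)
curve = np.stack([np.cos(np.pi * x), np.sin(np.pi * x), np.zeros_like(x)], axis=1)
print(eells_energy_curve(curve, x[1] - x[0]))   # close to zero
```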

28 Note that the Eells energy reduces to the well-known thin-plate spline energy if the target manifold N is also Euclidean, that is, N = R^n. [sent-54, score-0.787]

29 The apparently small step of the projection onto the tangent space leads to huge qualitative differences in the behavior of both energies. [sent-56, score-0.249]

30 In particular, the Eells energy penalizes only the second derivative along the manifold, whereas changes in the normal direction are discarded. [sent-57, score-0.365]

31 In this case the Eells energy penalizes only the acceleration along the curve (the change of the curve in tangent direction), whereas the thin-plate spline energy also penalizes the normal part, which just measures the curvature of the curve in the ambient space. [sent-59, score-1.034]

32 The input manifold is R and the output manifold N is a one-dimensional curve embedded in R2. [sent-61, score-0.76]

33 If the images Ψ(xi) of equidistant points xi in the input manifold M = R are also equidistant on the output manifold, then Ψ has no acceleration in terms of N. [sent-64, score-0.55]

34 Since the manifold is curved, the graph of Ψ also has to bend to stay on N. [sent-68, score-0.304]

35 The Eells energy only penalizes the intrinsic acceleration, that is, only the component parallel to the tangent space at Ψ(xi ), the green arrow. [sent-69, score-0.435]

36 In the last section we motivated that the Eells energy penalizes only changes along the manifold. [sent-70, score-0.471]

37 This property, and the fact that the Eells energy is independent of the parametrization of M and N, can be seen directly from the covariant formulation in the following section. [sent-71, score-0.262]

38 We briefly review the derivation of the Eells energy in [10], which we need in order to discuss properties of the Eells energy and the extension to manifold-valued input. [sent-72, score-0.406]

39 Let $x^\alpha$ and $y^\mu$ be coordinates on M and N. [sent-75, score-0.28]

40 In order to get a second covariant derivative of φ, we apply the covariant derivative $\nabla^M$ of M. [sent-78, score-0.312]

41 That is, the direction of differentiation $\partial/\partial x^\alpha \in T_xM$ is first mapped to $T_{\phi(x)}N$ using the differential dφ, and then the covariant derivative of N is used. [sent-82, score-0.19]

42 The Eells energy penalizes the squared norm of this second derivative tensor, corresponding to the Frobenius norm of the Hessian in Euclidean space: $S_{\mathrm{Eells}}(\phi) = \int_M \big\| \nabla d\phi \big\|^2_{T_x^*M \otimes T_x^*M \otimes T_{\phi(x)}N} \, dV(x)$. [sent-85, score-0.365]

43 In this tensorial form the energy is parametrization independent and, since it depends only on intrinsic properties, it measures smoothness of φ only with respect to the geometric properties of M and N . [sent-86, score-0.297]

44 Then we show in [10] that $\nabla d\phi$ simplifies to $\nabla d\phi = \big( \frac{\partial^2 \Psi^\mu}{\partial x^\beta \partial x^\alpha} - {}^{M}\Gamma^{\gamma}_{\beta\alpha} \frac{\partial \Psi^\mu}{\partial x^\gamma} \big)^{\top} dx^\beta \otimes dx^\alpha \otimes \frac{\partial}{\partial z^\mu}$ (5), where $(\cdot)^{\top}$ is the orthogonal projection onto the tangent space of N and $z^\mu$ are Cartesian coordinates in $\mathbb{R}^t$. [sent-89, score-0.326]

45 Note that if M is Euclidean, the Christoffel symbols ${}^{M}\Gamma$ are zero and the Eells energy reduces to Equation (2) discussed in the previous section. [sent-90, score-0.25]

46 This form of the Eells energy was also used in our previous implementation in [6] which could therefore not deal with non-Euclidean input manifolds. [sent-91, score-0.248]

47 In Section 3.3 we discuss how to compute $\nabla d\phi$, and thus the Eells energy, for this general case. [sent-94, score-0.203]

48 The null space of a regularization functional S(φ) is the set {φ | S(φ) = 0}. [sent-96, score-0.598]

49 Thus, the null space is the set of mappings which we are free to fit the data with – only deviations from the null space are penalized. [sent-98, score-0.304]

50 In standard regression, depending on the order of the differential used for regularization, the null space often consists of linear maps or polynomials of small degree. [sent-99, score-0.169]

51 We have shown in the last section that the Eells energy reduces to the classical thin-plate spline energy if input and output manifold are Euclidean. [sent-100, score-0.639]

52 For the thin-plate spline energy it is well known that the null space consists of linear maps between input and output space. [sent-101, score-0.47]

53 However, the concept of linearity breaks down if the input and output spaces are Riemannian manifolds, since manifolds have no linear structure. [sent-102, score-0.327]

54 A key observation towards a natural generalization of the concept of linearity to the manifold setting is that linear maps map straight lines to straight lines. [sent-103, score-0.428]

55 Now, a straight line between two points in Euclidean space corresponds to a curve with no acceleration in a Riemannian manifold, that is, a geodesic between the two points. [sent-104, score-0.403]

56 In analogy to the Euclidean case we therefore consider mappings which map geodesics to geodesics as the proper generalization of linear maps for Riemannian manifolds. [sent-105, score-0.378]

57 The following proposition taken from [11] defines this concept and shows that the set of generalized linear maps is exactly the null space of the Eells energy. [sent-106, score-0.175]

58 Proposition 1 [11] A map φ : M → N is totally geodesic if φ maps geodesics of M linearly to geodesics of N. [sent-107, score-0.302]

59 That is, the image of any geodesic in M is also a geodesic in N, though potentially with a different constant speed. [sent-109, score-0.41]

60 We have that φ is totally geodesic if and only if $\nabla d\phi = 0$. [sent-110, score-0.259]

61 This is the simplest relation a non-trivial mapping can encode between input and output, and totally geodesic mappings encode the same “linear” relationship even though the input and output manifolds are nonlinear. [sent-112, score-0.86]

62 However, note that like linear maps, totally geodesic maps are not necessarily distortion-free, but every distortion-free (isometric) mapping is totally geodesic. [sent-113, score-0.398]

63 With this restriction in mind, one can see the Eells energy also as a measure of distortion of the mapping φ. [sent-118, score-0.24]

64 This makes the Eells energy an interesting candidate for a variety of geometric fitting problems. [sent-119, score-0.234]

65 For example, it can be used for surface registration, as demonstrated in the experimental section. [sent-121, score-0.17]

66 In order to compute the Eells energy for general input manifolds, we need to be able to evaluate the second derivative in Equation (5), in particular the Christoffel symbols of the input manifold M. [sent-123, score-1.231]
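
As an illustration of what these Christoffel symbols are (a generic sketch of ours, not the authors' code), they can be computed from any parametrisation of M via the induced metric; here for the unit sphere with coordinates (theta, phi), for which closed-form values are known for comparison.

```python
import numpy as np

def embedding(x):                     # sphere parametrisation r(theta, phi) in R^3
    th, ph = x
    return np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

def metric(x, h=1e-5):
    """Induced metric g_ab = <d_a r, d_b r>, via central differences of the embedding."""
    J = np.stack([(embedding(x + h * e) - embedding(x - h * e)) / (2 * h)
                  for e in np.eye(2)], axis=1)
    return J.T @ J

def christoffel(x, h=1e-5):
    """Gamma^c_ab = 1/2 g^{cd} (d_a g_db + d_b g_da - d_d g_ab), numerically."""
    g = metric(x)
    ginv = np.linalg.inv(g)
    dg = np.stack([(metric(x + h * e) - metric(x - h * e)) / (2 * h)
                   for e in np.eye(2)])        # dg[c, a, b] = d_c g_ab
    Gamma = np.zeros((2, 2, 2))
    for c in range(2):
        for a in range(2):
            for b in range(2):
                Gamma[c, a, b] = 0.5 * sum(
                    ginv[c, d] * (dg[a, d, b] + dg[b, d, a] - dg[d, a, b])
                    for d in range(2))
    return Gamma

x = np.array([np.pi / 3, 0.2])
G = christoffel(x)
# Known closed forms for the sphere: Gamma^theta_{phi phi} = -sin(theta)cos(theta),
# Gamma^phi_{theta phi} = cot(theta).
print(G[0, 1, 1], -np.sin(x[0]) * np.cos(x[0]))
print(G[1, 0, 1], 1.0 / np.tan(x[0]))
```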

67 Let $x^1, \ldots, x^m$ be the coordinates associated with an orthonormal basis of the tangent space $T_pM$, that is, p has coordinates x = 0. [sent-131, score-0.298]

68 Then in Cartesian coordinates z of Rs centered at p and aligned with the tangent space Tp M , the manifold can be approximated up to second order as z(x) = (x1 , . [sent-132, score-0.525]

69 Note that the principal curvature, also called extrinsic curvature, quantifies how much the input manifold bends with respect to the ambient space. [sent-137, score-0.43]

70 The principal curvatures can be computed directly for manifolds in analytic form and approximated for point cloud data using standard techniques, see Section 4. [sent-138, score-0.378]
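
One standard way to do this for point-cloud data (a sketch of such a technique, not necessarily the one used by the authors): fit a quadratic height function over the PCA tangent plane of a neighbourhood and read the principal curvatures off the eigenvalues of its Hessian.

```python
import numpy as np

def principal_curvatures(point, neighbors):
    """Estimate principal curvatures at `point` from nearby surface samples `neighbors`."""
    centered = neighbors - point
    # PCA of the neighbourhood: the first two right singular vectors span the tangent plane,
    # the last one approximates the surface normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t1, t2, n = vt[0], vt[1], vt[2]
    u, v, w = centered @ t1, centered @ t2, centered @ n
    # Least-squares quadratic height function w ≈ a u^2 + b uv + c v^2 + d u + e v + f.
    A = np.column_stack([u**2, u*v, v**2, u, v, np.ones_like(u)])
    a, b, c, *_ = np.linalg.lstsq(A, w, rcond=None)[0]
    H = np.array([[2*a, b], [b, 2*c]])        # Hessian of the height function at the origin
    return np.linalg.eigvalsh(H)              # principal curvatures, up to the sign of n

# Toy check on a sphere of radius 2, where both principal curvatures have magnitude 1/2.
rng = np.random.default_rng(0)
p = np.array([0.0, 0.0, 2.0])
q = p + 0.3 * rng.standard_normal((40, 3))
neighbors = 2.0 * q / np.linalg.norm(q, axis=1, keepdims=True)
print(principal_curvatures(p, neighbors))     # approximately [±0.5, ±0.5]
```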

71 Expression (6) allows us to compute the Eells energy in the case of manifold-valued input. [sent-146, score-0.203]

72 We just have to replace the second partial derivative in the Eells energy in (2) by this input-manifold-adapted formulation, which can be computed easily. [sent-147, score-0.604]

73 We choose the K local polynomial centers ci approximately uniformly distributed over M , thereby adapting the function class to the shape of the input manifold M . [sent-155, score-0.503]
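
The excerpt does not spell out the exact basis functions, so the following is only a hedged stand-in sketch (Gaussian bumps instead of local polynomials, and M = [0, 1]) meant to show how Ψ becomes a function of a finite weight array w, which is what the subsequent optimisation works on.

```python
import numpy as np

def make_psi(centers, W, scale=0.2):
    """Psi_W(x) = sum_i phi_i(x) * W[i], with local bumps phi_i centred at centers[i]."""
    def psi(x):
        phi = np.exp(-0.5 * ((x - centers) / scale) ** 2)   # (K,) local basis responses
        return phi @ W                                       # value in R^t (here t = 3)
    return psi

K, t = 10, 3
centers = np.linspace(0.0, 1.0, K)           # approximately uniform over M = [0, 1]
rng = np.random.default_rng(0)
W = rng.standard_normal((K, t))               # the finite-dimensional unknowns ("w")
Psi = make_psi(centers, W)
print(Psi(0.3))                               # a point in R^t, not yet constrained to lie on N
```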

74 We compute the energy integral (2) as a function of w, by summing up the energy density over an approximately uniform discretisation of M . [sent-157, score-0.406]

75 The projection onto the tangent space, used in (2) and (5), and the second order approximation for computing intrinsic second derivatives, used in (5) and (6), are manifold specific and are explained below. [sent-158, score-0.552]

76 We also express the squared geodesic distance used as loss function in terms of w, see below, and thus end up with a finite dimensional optimisation problem in w which we solve using Newton’s method with line search. [sent-159, score-0.304]
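
The outer optimisation is a standard Newton iteration with backtracking line search; here is a generic, self-contained sketch of that scheme on a stand-in objective (not the authors' solver and not the actual Eells objective).

```python
import numpy as np

def newton_with_line_search(f, grad, hess, w0, tol=1e-8, max_iter=50):
    w = w0.copy()
    for _ in range(max_iter):
        g, H = grad(w), hess(w)
        step = np.linalg.solve(H + 1e-10 * np.eye(len(w)), -g)    # mildly damped Newton step
        t = 1.0
        while f(w + t * step) > f(w) + 1e-4 * t * (g @ step):     # Armijo backtracking
            t *= 0.5
            if t < 1e-12:
                break
        w = w + t * step
        if np.linalg.norm(g) < tol:
            break
    return w

# Stand-in objective: a convex quadratic (made up for illustration).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f    = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b
hess = lambda w: A
print(newton_with_line_search(f, grad, hess, np.zeros(2)))   # ≈ solution of A w = b
```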

77 The projection of the second derivative of Ψ onto the tangent space for Ψ(x) ∈ N , as required in (2) or (5), is computed using the iso-distance manifolds NΨ(x) = {y ∈ Rt |d(y, N ) = d(Ψ(x), N )} of N . [sent-169, score-0.588]

78 These two constructions are sensible, since as Ψ approaches the manifold N for increasing γ, both approximations converge to the desired operations on the manifold N . [sent-171, score-0.608]

79 Manifold Operations: For each output manifold N , we need to compute projections onto the tangent spaces of N and its iso-distance manifolds, the closest point to p ∈ Rt on N , and geodesic distances on N . [sent-172, score-0.811]

80 Using a signed distance function η, projections P onto the tangent spaces of N or its iso-distance manifolds at $p \in \mathbb{R}^t$ are given as $P = \mathbb{1} - \|\nabla\eta(p)\|^{-2}\, \nabla\eta(p)\, \nabla\eta(p)^{T}$. [sent-173, score-0.539]

81 Finding the closest point to $p \in \mathbb{R}^t$ in $S^{t-1}$ is trivial, and the geodesic distance is $d_{S^{t-1}}(x, y) = \arccos\langle x, y \rangle$ for $x, y \in S^{t-1}$. [sent-175, score-0.283]
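
For the concrete case of the sphere (signed distance η(p) = ||p|| − 1, so ∇η(p) = p/||p||), these three operations look as follows; a small sketch of ours, not the paper's code. Note that the projection is the same at points of the iso-distance manifolds, e.g. at p with ||p|| = 2.

```python
import numpy as np

def tangent_projection(p):
    """P = I - grad_eta grad_eta^T / ||grad_eta||^2 at p; here grad_eta = p / ||p||."""
    g = p / np.linalg.norm(p)
    return np.eye(len(p)) - np.outer(g, g)

def closest_point_on_sphere(p):
    return p / np.linalg.norm(p)

def geodesic_distance_sphere(x, y):
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

p = np.array([0.0, 0.0, 2.0])                    # a point on the iso-distance manifold ||p|| = 2
v = np.array([1.0, 2.0, 3.0])
print(tangent_projection(p) @ v)                 # -> [1, 2, 0]: normal component removed
print(closest_point_on_sphere(p))                # -> [0, 0, 1]
print(geodesic_distance_sphere(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))  # pi/2
```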

82 For the surface registration task, the manifold is given as a densely sampled point cloud with surface normals. [sent-176, score-0.651]

83 Since in the surface registration problem we used rather large weights for the loss, Ψ(Xi ) and Yi were always close on the surface. [sent-180, score-0.17]

84 In this case the geodesic distance can be well approximated by the Euclidean one, so that for performance reasons we used the Euclidean distance. [sent-181, score-0.256]

85 An exact, but more expensive method to compute geodesics is to minimize the harmonic energy of curves, see [6]. [sent-182, score-0.303]

86 For non-Euclidean input manifolds M , we similarly compute local second order approximations of M in Rs to estimate the principal curvatures needed for the second derivative of Ψ in (6). [sent-183, score-0.5]

87 Line on Sphere: Consider regression from [0, 1] to the sphere S2 ⊆ R3 . [sent-186, score-0.181]

88 The blue dots show the estimated curve for our Eells-regularized approach, the green dots depict thin-plate splines (TPS) in R3 radially projected onto the sphere, and the red dots show results for the local approach of [8]. [sent-193, score-0.349]

89 We compare our framework for nonparametric regression between manifolds with standard cubic smoothing splines in R3 – the equivalent of thin-plate splines (TPS) for one input dimension – projected radially on the sphere, and with the local manifold-valued Nadaraya-Watson estimator of [8]. [sent-203, score-0.559]
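
For reference, here is a simplified sketch of what such a local, manifold-valued Nadaraya-Watson estimate can look like on S^2 (a kernel-weighted Fréchet mean computed with log/exp maps); this is our own stand-in, not the estimator of [8] verbatim, and all data below are made up.

```python
import numpy as np

def frechet_mean_sphere(points, weights, iters=30):
    """Weighted intrinsic mean on the unit sphere via log/exp-map gradient steps."""
    m = points[np.argmax(weights)].copy()
    for _ in range(iters):
        logs = []
        for p in points:                              # log map of each point at m
            c = np.clip(np.dot(m, p), -1.0, 1.0)
            d = np.arccos(c)
            v = p - c * m
            nv = np.linalg.norm(v)
            logs.append(np.zeros_like(m) if nv < 1e-12 else d * v / nv)
        step = np.average(np.stack(logs), axis=0, weights=weights)
        ns = np.linalg.norm(step)
        if ns < 1e-12:
            break
        m = np.cos(ns) * m + np.sin(ns) * step / ns   # exp map at m
    return m

def nadaraya_watson_sphere(x, X, Y, bandwidth=0.1):
    w = np.exp(-0.5 * ((x - X) / bandwidth) ** 2)     # Gaussian kernel weights on the input
    return frechet_mean_sphere(Y, w / w.sum())

# Toy usage: noisy samples of a great-circle curve from [0, 1] to S^2.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 50)
Y = np.stack([np.cos(np.pi * X), np.sin(np.pi * X), np.zeros_like(X)], axis=1)
Y = Y + 0.05 * rng.standard_normal(Y.shape)
Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
print(nadaraya_watson_sphere(0.5, X, Y))              # close to (0, 1, 0)
```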

90 Clearly, as the curve is very densely sampled for high k, both approaches perform similarly, since the problem becomes essentially local, and locally all manifolds are Euclidean. [sent-209, score-0.334]

91 Using our manifold-adapted approach we avoid distorting projections and use the true manifold distances in the loss and the regularizer. [sent-212, score-0.379]

92 A dense correspondence map, that is, an assignment of all points of one head to the anatomically equivalent points on the other head, allows one to perform morphing [12] or to build linear object models [13] which are flexible tools for computer graphics as well as computer vision. [sent-215, score-0.266]

93 Most approaches minimize a functional consisting of a local similarity measure and a smoothness functional or regularizer for the overall mapping. [sent-217, score-0.219]

94 Motivated by the fact that the Eells energy favors simple “linear” mappings, we propose to use it as regularizer for correspondence maps. [sent-218, score-0.358]

95 Here, we have used a subset of the head database of [13] and considered their correspondence as ground-truth. [sent-221, score-0.179]

96 We took the average head of one part of the database and registered it to the other 10 faces, using the mean distance to the correspondence of [13] as error score. [sent-223, score-0.23]

97 Figure 3: Correspondence computation from the original head in a) to the target head in c) with 55 markers (yellow crosses). [sent-232, score-0.232]

98 Distance of the computed correspondence to the correspondence of [13] is color-coded in d) - f) for different methods. [sent-234, score-0.196]

99 The TPS method represents the initial solution of our approach, that is, a mapping into R3 minimizing the TPS energy (3), which is then projected onto the target manifold. [sent-237, score-0.36]

100 [12] use a volume-deformation based approach that directly finds smooth mappings from surface to surface, without the need of projection, but their regularizer does not take into account the true distances along the surface. [sent-238, score-0.312]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('eells', 0.586), ('manifold', 0.304), ('tps', 0.268), ('manifolds', 0.242), ('geodesic', 0.205), ('energy', 0.203), ('riemannian', 0.174), ('mappings', 0.13), ('tangent', 0.12), ('sphere', 0.119), ('euclidean', 0.107), ('rt', 0.103), ('surface', 0.102), ('geodesics', 0.1), ('correspondence', 0.098), ('derivative', 0.097), ('rs', 0.092), ('christoffel', 0.084), ('head', 0.081), ('coordinates', 0.077), ('registration', 0.068), ('steinke', 0.067), ('curve', 0.067), ('penalizes', 0.065), ('onto', 0.063), ('null', 0.063), ('splines', 0.063), ('regression', 0.062), ('covariant', 0.059), ('regularizer', 0.057), ('totally', 0.054), ('distance', 0.051), ('cloud', 0.05), ('curvatures', 0.05), ('dx', 0.048), ('maps', 0.048), ('spline', 0.047), ('symbols', 0.047), ('functional', 0.046), ('ambient', 0.045), ('yi', 0.045), ('regularization', 0.045), ('input', 0.045), ('xi', 0.044), ('curvature', 0.043), ('projection', 0.042), ('acceleration', 0.042), ('tx', 0.042), ('proposition', 0.04), ('smoothness', 0.04), ('cartesian', 0.04), ('markers', 0.04), ('heads', 0.04), ('output', 0.04), ('hessian', 0.038), ('straight', 0.038), ('hein', 0.038), ('tp', 0.038), ('isometric', 0.038), ('mapping', 0.037), ('principal', 0.036), ('differential', 0.034), ('correspondences', 0.034), ('signed', 0.034), ('seells', 0.033), ('tog', 0.033), ('rm', 0.033), ('centers', 0.033), ('graphics', 0.033), ('shape', 0.032), ('polynomial', 0.032), ('newton', 0.031), ('dn', 0.031), ('geometric', 0.031), ('mm', 0.03), ('target', 0.03), ('local', 0.03), ('isometrically', 0.029), ('mises', 0.029), ('forehead', 0.029), ('interior', 0.029), ('projections', 0.029), ('multivariate', 0.029), ('points', 0.027), ('projected', 0.027), ('radially', 0.027), ('ci', 0.027), ('closest', 0.027), ('rn', 0.026), ('optimisation', 0.025), ('isometry', 0.025), ('densely', 0.025), ('submanifold', 0.025), ('regularized', 0.025), ('dots', 0.024), ('space', 0.024), ('equidistant', 0.024), ('distances', 0.023), ('intrinsic', 0.023), ('loss', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 151 nips-2008-Non-parametric Regression Between Manifolds

Author: Florian Steinke, Matthias Hein

Abstract: This paper discusses non-parametric regression between Riemannian manifolds. This learning problem arises frequently in many application areas, ranging from signal processing and computer vision to robotics and computer graphics. We present a new algorithmic scheme for the solution of this general learning problem based on regularized empirical risk minimization. The regularization functional takes into account the geometry of input and output manifold, and we show that it implements a prior which is particularly natural. Moreover, we demonstrate that our algorithm performs well in a difficult surface registration problem.

2 0.21074036 60 nips-2008-Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds

Author: Jeremy Lewi, Robert Butera, David M. Schneider, Sarah Woolley, Liam Paninski

Abstract: Sequential optimal design methods hold great promise for improving the efficiency of neurophysiology experiments. However, previous methods for optimal experimental design have incorporated only weak prior information about the underlying neural system (e.g., the sparseness or smoothness of the receptive field). Here we describe how to use stronger prior information, in the form of parametric models of the receptive field, in order to construct optimal stimuli and further improve the efficiency of our experiments. For example, if we believe that the receptive field is well-approximated by a Gabor function, then our method constructs stimuli that optimally constrain the Gabor parameters (orientation, spatial frequency, etc.) using as few experimental trials as possible. More generally, we may believe a priori that the receptive field lies near a known sub-manifold of the full parameter space; in this case, our method chooses stimuli in order to reduce the uncertainty along the tangent space of this sub-manifold as rapidly as possible. Applications to simulated and real data indicate that these methods may in many cases improve the experimental efficiency. 1

3 0.11563656 51 nips-2008-Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm

Author: Andrew Smith, Hongyuan Zha, Xiao-ming Wu

Abstract: We study the convergence and the rate of convergence of a local manifold learning algorithm: LTSA [13]. The main technical tool is the perturbation analysis on the linear invariant subspace that corresponds to the solution of LTSA. We derive a worst-case upper bound of errors for LTSA which naturally leads to a convergence result. We then derive the rate of convergence for LTSA in a special case. 1

4 0.076548725 194 nips-2008-Regularized Learning with Networks of Features

Author: Ted Sandler, John Blitzer, Partha P. Talukdar, Lyle H. Ungar

Abstract: For many supervised learning problems, we possess prior knowledge about which features yield similar information about the target variable. In predicting the topic of a document, we might know that two words are synonyms, and when performing image recognition, we know which pixels are adjacent. Such synonymous or neighboring features are near-duplicates and should be expected to have similar weights in an accurate model. Here we present a framework for regularized learning when one has prior knowledge about which features are expected to have similar and dissimilar weights. The prior knowledge is encoded as a network whose vertices are features and whose edges represent similarities and dissimilarities between them. During learning, each feature’s weight is penalized by the amount it differs from the average weight of its neighbors. For text classification, regularization using networks of word co-occurrences outperforms manifold learning and compares favorably to other recently proposed semi-supervised learning methods. For sentiment analysis, feature networks constructed from declarative human knowledge significantly improve prediction accuracy. 1

5 0.070554167 241 nips-2008-Transfer Learning by Distribution Matching for Targeted Advertising

Author: Steffen Bickel, Christoph Sawade, Tobias Scheffer

Abstract: We address the problem of learning classifiers for several related tasks that may differ in their joint distribution of input and output variables. For each task, small – possibly even empty – labeled samples and large unlabeled samples are available. While the unlabeled samples reflect the target distribution, the labeled samples may be biased. This setting is motivated by the problem of predicting sociodemographic features for users of web portals, based on the content which they have accessed. Here, questionnaires offered to a portion of each portal’s users produce biased samples. We derive a transfer learning procedure that produces resampling weights which match the pool of all examples to the target distribution of any given task. Transfer learning enables us to make predictions even for new portals with few or no training data and improves the overall prediction accuracy. 1

6 0.069058917 80 nips-2008-Extended Grassmann Kernels for Subspace-Based Learning

7 0.064010493 84 nips-2008-Fast Prediction on a Tree

8 0.059018843 179 nips-2008-Phase transitions for high-dimensional joint support recovery

9 0.05742836 201 nips-2008-Robust Near-Isometric Matching via Structured Learning of Graphical Models

10 0.055021107 69 nips-2008-Efficient Exact Inference in Planar Ising Models

11 0.05392972 122 nips-2008-Learning with Consistency between Inductive Functions and Kernels

12 0.050550029 12 nips-2008-Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes

13 0.049698342 61 nips-2008-Diffeomorphic Dimensionality Reduction

14 0.048388582 202 nips-2008-Robust Regression and Lasso

15 0.047381606 126 nips-2008-Localized Sliced Inverse Regression

16 0.046685513 135 nips-2008-Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of \boldmath$\ell 1$-regularized MLE

17 0.045920491 189 nips-2008-Rademacher Complexity Bounds for Non-I.I.D. Processes

18 0.04438768 62 nips-2008-Differentiable Sparse Coding

19 0.043747157 25 nips-2008-An interior-point stochastic approximation method and an L1-regularized delta rule

20 0.043516155 145 nips-2008-Multi-stage Convex Relaxation for Learning with Sparse Regularization


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.161), (1, -0.021), (2, -0.02), (3, 0.065), (4, 0.058), (5, 0.001), (6, 0.021), (7, -0.01), (8, 0.024), (9, 0.004), (10, 0.049), (11, -0.032), (12, -0.085), (13, -0.006), (14, -0.026), (15, 0.004), (16, 0.029), (17, 0.129), (18, -0.02), (19, 0.029), (20, -0.051), (21, -0.088), (22, 0.005), (23, -0.118), (24, -0.026), (25, 0.123), (26, 0.035), (27, 0.016), (28, 0.06), (29, 0.171), (30, -0.007), (31, -0.102), (32, 0.099), (33, 0.018), (34, -0.0), (35, -0.218), (36, -0.092), (37, 0.087), (38, 0.003), (39, -0.133), (40, 0.011), (41, -0.125), (42, 0.005), (43, 0.11), (44, -0.007), (45, 0.14), (46, -0.074), (47, -0.007), (48, 0.054), (49, -0.155)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93284142 151 nips-2008-Non-parametric Regression Between Manifolds

Author: Florian Steinke, Matthias Hein

Abstract: This paper discusses non-parametric regression between Riemannian manifolds. This learning problem arises frequently in many application areas, ranging from signal processing and computer vision to robotics and computer graphics. We present a new algorithmic scheme for the solution of this general learning problem based on regularized empirical risk minimization. The regularization functional takes into account the geometry of input and output manifold, and we show that it implements a prior which is particularly natural. Moreover, we demonstrate that our algorithm performs well in a difficult surface registration problem.

2 0.79333174 60 nips-2008-Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds

Author: Jeremy Lewi, Robert Butera, David M. Schneider, Sarah Woolley, Liam Paninski

Abstract: Sequential optimal design methods hold great promise for improving the efficiency of neurophysiology experiments. However, previous methods for optimal experimental design have incorporated only weak prior information about the underlying neural system (e.g., the sparseness or smoothness of the receptive field). Here we describe how to use stronger prior information, in the form of parametric models of the receptive field, in order to construct optimal stimuli and further improve the efficiency of our experiments. For example, if we believe that the receptive field is well-approximated by a Gabor function, then our method constructs stimuli that optimally constrain the Gabor parameters (orientation, spatial frequency, etc.) using as few experimental trials as possible. More generally, we may believe a priori that the receptive field lies near a known sub-manifold of the full parameter space; in this case, our method chooses stimuli in order to reduce the uncertainty along the tangent space of this sub-manifold as rapidly as possible. Applications to simulated and real data indicate that these methods may in many cases improve the experimental efficiency. 1

3 0.70111036 51 nips-2008-Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm

Author: Andrew Smith, Hongyuan Zha, Xiao-ming Wu

Abstract: We study the convergence and the rate of convergence of a local manifold learning algorithm: LTSA [13]. The main technical tool is the perturbation analysis on the linear invariant subspace that corresponds to the solution of LTSA. We derive a worst-case upper bound of errors for LTSA which naturally leads to a convergence result. We then derive the rate of convergence for LTSA in a special case. 1

4 0.51862776 188 nips-2008-QUIC-SVD: Fast SVD Using Cosine Trees

Author: Michael P. Holmes, Jr. Isbell, Charles Lee, Alexander G. Gray

Abstract: The Singular Value Decomposition is a key operation in many machine learning methods. Its computational cost, however, makes it unscalable and impractical for applications involving large datasets or real-time responsiveness, which are becoming increasingly common. We present a new method, QUIC-SVD, for fast approximation of the whole-matrix SVD based on a new sampling mechanism called the cosine tree. Our empirical tests show speedups of several orders of magnitude over exact SVD. Such scalability should enable QUIC-SVD to accelerate and enable a wide array of SVD-based methods and applications. 1

5 0.50006992 126 nips-2008-Localized Sliced Inverse Regression

Author: Qiang Wu, Sayan Mukherjee, Feng Liang

Abstract: We developed localized sliced inverse regression for supervised dimension reduction. It has the advantages of preventing degeneracy, increasing estimation accuracy, and automatic subclass discovery in classification problems. A semisupervised version is proposed for the use of unlabeled data. The utility is illustrated on simulated as well as real data sets.

6 0.43131799 122 nips-2008-Learning with Consistency between Inductive Functions and Kernels

7 0.34478 68 nips-2008-Efficient Direct Density Ratio Estimation for Non-stationarity Adaptation and Outlier Detection

8 0.33895376 61 nips-2008-Diffeomorphic Dimensionality Reduction

9 0.32829833 80 nips-2008-Extended Grassmann Kernels for Subspace-Based Learning

10 0.32198551 23 nips-2008-An ideal observer model of infant object perception

11 0.31456873 124 nips-2008-Load and Attentional Bayes

12 0.30097774 30 nips-2008-Bayesian Experimental Design of Magnetic Resonance Imaging Sequences

13 0.29822227 194 nips-2008-Regularized Learning with Networks of Features

14 0.29246891 78 nips-2008-Exact Convex Confidence-Weighted Learning

15 0.29058954 185 nips-2008-Privacy-preserving logistic regression

16 0.28968102 196 nips-2008-Relative Margin Machines

17 0.28113839 96 nips-2008-Hebbian Learning of Bayes Optimal Decisions

18 0.26677617 84 nips-2008-Fast Prediction on a Tree

19 0.2665866 76 nips-2008-Estimation of Information Theoretic Measures for Continuous Random Variables

20 0.26647165 25 nips-2008-An interior-point stochastic approximation method and an L1-regularized delta rule


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(6, 0.038), (7, 0.119), (12, 0.031), (15, 0.015), (28, 0.15), (57, 0.078), (59, 0.021), (63, 0.028), (64, 0.267), (71, 0.012), (77, 0.059), (81, 0.011), (83, 0.076)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.83774352 105 nips-2008-Improving on Expectation Propagation

Author: Manfred Opper, Ulrich Paquet, Ole Winther

Abstract: A series of corrections is developed for the fixed points of Expectation Propagation (EP), which is one of the most popular methods for approximate probabilistic inference. These corrections can lead to improvements of the inference approximation or serve as a sanity check, indicating when EP yields unreliable results.

same-paper 2 0.78339249 151 nips-2008-Non-parametric Regression Between Manifolds

Author: Florian Steinke, Matthias Hein

Abstract: This paper discusses non-parametric regression between Riemannian manifolds. This learning problem arises frequently in many application areas, ranging from signal processing and computer vision to robotics and computer graphics. We present a new algorithmic scheme for the solution of this general learning problem based on regularized empirical risk minimization. The regularization functional takes into account the geometry of input and output manifold, and we show that it implements a prior which is particularly natural. Moreover, we demonstrate that our algorithm performs well in a difficult surface registration problem.

3 0.77585089 108 nips-2008-Integrating Locally Learned Causal Structures with Overlapping Variables

Author: David Danks, Clark Glymour, Robert E. Tillman

Abstract: In many domains, data are distributed among datasets that share only some variables; other recorded variables may occur in only one dataset. While there are asymptotically correct, informative algorithms for discovering causal relationships from a single dataset, even with missing values and hidden variables, there have been no such reliable procedures for distributed data with overlapping variables. We present a novel, asymptotically correct procedure that discovers a minimal equivalence class of causal DAG structures using local independence information from distributed data of this form and evaluate its performance using synthetic and real-world data against causal discovery algorithms for single datasets and applying Structural EM, a heuristic DAG structure learning procedure for data with missing values, to the concatenated data.

4 0.76092696 123 nips-2008-Linear Classification and Selective Sampling Under Low Noise Conditions

Author: Giovanni Cavallanti, Nicolò Cesa-bianchi, Claudio Gentile

Abstract: We provide a new analysis of an efficient margin-based algorithm for selective sampling in classification problems. Using the so-called Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised and the selective sampling versions of the basic algorithm. Our analysis reveals that, excluding logarithmic factors, the average risk of the selective sampler converges to the Bayes risk at rate N^{-(1+α)(2+α)/(2(3+α))}, where N denotes the number of queried labels, and α > 0 is the exponent in the low noise condition. For all α > √3 − 1 ≈ 0.73 this convergence rate is asymptotically faster than the rate N^{-(1+α)/(2+α)} achieved by the fully supervised version of the same classifier, which queries all labels, and for α → ∞ the two rates exhibit an exponential gap. Experiments on textual data reveal that simple variants of the proposed selective sampler perform much better than popular and similarly efficient competitors.

5 0.63241887 63 nips-2008-Dimensionality Reduction for Data in Multiple Feature Representations

Author: Yen-yu Lin, Tyng-luh Liu, Chiou-shann Fuh

Abstract: In solving complex visual learning tasks, adopting multiple descriptors to more precisely characterize the data has been a feasible way for improving performance. These representations are typically high dimensional and assume diverse forms. Thus finding a way to transform them into a unified space of lower dimension generally facilitates the underlying tasks, such as object recognition or clustering. We describe an approach that incorporates multiple kernel learning with dimensionality reduction (MKL-DR). While the proposed framework is flexible in simultaneously tackling data in various feature representations, the formulation itself is general in that it is established upon graph embedding. It follows that any dimensionality reduction techniques explainable by graph embedding can be generalized by our method to consider data in multiple feature representations.

6 0.62895709 194 nips-2008-Regularized Learning with Networks of Features

7 0.62500262 71 nips-2008-Efficient Sampling for Gaussian Process Inference using Control Variables

8 0.62482977 79 nips-2008-Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

9 0.62467223 64 nips-2008-DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification

10 0.62283933 205 nips-2008-Semi-supervised Learning with Weakly-Related Unlabeled Data : Towards Better Text Categorization

11 0.62148136 200 nips-2008-Robust Kernel Principal Component Analysis

12 0.62081748 192 nips-2008-Reducing statistical dependencies in natural signals using radial Gaussianization

13 0.62045485 248 nips-2008-Using matrices to model symbolic relationship

14 0.61967897 138 nips-2008-Modeling human function learning with Gaussian processes

15 0.61913866 62 nips-2008-Differentiable Sparse Coding

16 0.61828959 66 nips-2008-Dynamic visual attention: searching for coding length increments

17 0.61736119 95 nips-2008-Grouping Contours Via a Related Image

18 0.61640656 221 nips-2008-Stochastic Relational Models for Large-scale Dyadic Data using MCMC

19 0.61519819 99 nips-2008-High-dimensional support union recovery in multivariate regression

20 0.6146521 176 nips-2008-Partially Observed Maximum Entropy Discrimination Markov Networks