iccv iccv2013 iccv2013-100 knowledge-graph by maker-knowledge-mining

100 iccv-2013-Curvature-Aware Regularization on Riemannian Submanifolds


Source: pdf

Author: Kwang In Kim, James Tompkin, Christian Theobalt

Abstract: One fundamental assumption in object recognition as well as in other computer vision and pattern recognition problems is that the data generation process lies on a manifold and that it respects the intrinsic geometry of the manifold. This assumption is held in several successful algorithms for diffusion and regularization, in particular, in graph-Laplacian-based algorithms. We claim that the performance of existing algorithms can be improved if we additionally account for how the manifold is embedded within the ambient space, i.e., if we consider the extrinsic geometry of the manifold. We present a procedure for characterizing the extrinsic (as well as intrinsic) curvature of a manifold M which is described by a sampled point cloud in a high-dimensional Euclidean space. Once estimated, we use this characterization in general diffusion and regularization on M, and form a new regularizer on a point cloud. The resulting re-weighted graph Laplacian demonstrates superior performance over classical graph Laplacian in semisupervised learning and spectral clustering.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 This assumption is held in several successful algorithms for diffusion and regularization, in particular, in graph-Laplacian-based algorithms. [sent-2, score-0.353]

2 We claim that the performance of existing algorithms can be improved if we additionally account for how the manifold is embedded within the ambient space, i. [sent-3, score-0.358]

3 , if we consider the extrinsic geometry of the manifold. [sent-5, score-0.291]

4 We present a procedure for characterizing the extrinsic (as well as intrinsic) curvature of a manifold M which is described by a sampled point cloud in a high-dimensional Euclidean space. [sent-6, score-0.696]

5 Once estimated, we use this characterization in general diffusion and regularization on M, and form a new regularizer on a point cloud. [sent-7, score-0.556]

6 The resulting re-weighted graph Laplacian demonstrates superior performance over classical graph Laplacian in semisupervised learning and spectral clustering. [sent-8, score-0.147]

7 Introduction One of the fundamental assumptions in manifold-based data processing algorithms is that the intrinsic geometry of a manifold is relevant to the data which lie upon it. [sent-10, score-0.49]

8 For instance, the graph Laplacian matrix is used to measure the pair-wise dissimilarities of the evaluation of a function f on a given point cloud X, and subsequently this can be used for discretized diffusion and regularization of f on X. [sent-11, score-0.289]

9 One way of justifying the use of the graph Laplacian comes from its asymptotic behavior. [sent-12, score-0.056]

10 When the data X is such that the corresponding probability distribution P has support in M, the graph Laplacian converges to the Laplace-Beltrami operator [2, 10], which respects only the intrinsic geometry of M. [sent-15, score-0.438]

11 Accordingly, for a large X, the graph Laplacian helps us measure the variation of functions along M and neglect any random perturbations normal to M that might be irrelevant noise. [sent-16, score-0.232]
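
To make this discretization concrete, the following sketch builds a Gaussian-weighted k-nearest-neighbour graph Laplacian from a point cloud and evaluates the discrete smoothness f^T L f. It is a generic construction in the spirit of the graph Laplacians discussed here, not code from the paper, and the neighbourhood size k and bandwidth sigma are illustrative choices.

import numpy as np
from scipy.spatial import cKDTree

def graph_laplacian(X, k=10, sigma=1.0):
    # Unnormalized graph Laplacian L = D - W for a point cloud X of shape (n, d),
    # with Gaussian edge weights on k-nearest-neighbour edges.
    n = X.shape[0]
    dist, idx = cKDTree(X).query(X, k=k + 1)       # column 0 is the point itself
    W = np.zeros((n, n))
    for i in range(n):
        for d_ij, j in zip(dist[i, 1:], idx[i, 1:]):
            w = np.exp(-d_ij ** 2 / (2.0 * sigma ** 2))
            W[i, j] = W[j, i] = max(W[i, j], w)    # symmetrize
    return np.diag(W.sum(axis=1)) - W

# f^T L f measures how much f varies between neighbouring points, i.e., along M;
# small values indicate that f changes slowly over the graph.
X = np.random.randn(200, 5)
f = np.sin(X[:, 0])
L = graph_laplacian(X)
smoothness = f @ L @ f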

12 Such algorithms (e.g., [7, 13]) are justified in exploiting intrinsic geometry by successes in semi-supervised learning, spectral clustering, and dimensionality reduction applications. [sent-19, score-0.265]

13 In this paper, we question a fundamental assumption of manifold-based algorithms. [sent-20, score-0.059]

14 It is well known that the extrinsic geometry of M, that is, how M is embedded in an ambient space, is important for image and mesh surface processing. [sent-21, score-0.639]

15 However, is the extrinsic geometry relevant at all for high-dimensional data processing? [sent-22, score-0.291]

16 The anisotropic diffusion process on manifolds motivates our question above, and connects the highdimensional data processing problem with low-dimensional image and mesh surface processing (Sec. [sent-24, score-0.901]

17 The anisotropic diffusion process exploits the extrinsic (as well as intrinsic) geometry, and we discuss how this can be extended to any sub-manifold with arbitrary dimension and co-dimension. [sent-26, score-0.768]

18 This presents a practical diffusion and regularization scheme which can be applied even when the manifold is not observed directly but is indirectly presented as a sampled point cloud (Sec. [sent-27, score-0.664]

19 This regularization leads to a re-weighted graph Laplacian, which we evaluate in the context of semi-supervised learning and spectral clustering, and discover that the new algorithm significantly improves performance over the classical graph Laplacian (Sec. [sent-29, score-0.247]
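
For context, here is a minimal sketch of graph-Laplacian-regularized semi-supervised learning of the kind evaluated in the paper: minimize the squared error on the labeled points plus gamma times f^T L f, which reduces to a linear system. The re-weighted Laplacian proposed here would simply be substituted for L; the function name, the selection-matrix formulation, and gamma are illustrative assumptions rather than details taken from the paper.

import numpy as np

def laplacian_ssl(L, labeled_idx, y_labeled, gamma=0.1):
    # Semi-supervised prediction: argmin_f ||S(f - y)||^2 + gamma * f^T L f,
    # where S selects the labeled points; the optimum solves (S + gamma*L) f = S y.
    n = L.shape[0]
    S = np.zeros((n, n))
    S[labeled_idx, labeled_idx] = 1.0
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    return np.linalg.solve(S + gamma * L, S @ y)

For multi-class problems the same solve is applied to one indicator vector per class and each point takes the class with the largest response; spectral clustering instead uses the bottom eigenvectors of L (or of its re-weighted variant).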

20 Anisotropic diffusion on manifolds: The Laplace-Beltrami operator Δ on a manifold M is defined from the divergence and gradient operators, Δf = −div grad f (Equ. 1), where f is a smooth function on M. [sent-32, score-0.708]

21 This is one of the most important operators in differential geometry and is applied to describe physical phenomena on M. [sent-33, score-0.221]

22 In particular, it is the generator of the isotropic diffusion process ∂f/∂t = −Δf (Equ. 2), which describes the evolution of f on M, i.e., how the values of f spread over time. [sent-34, score-0.509]

23 This process is isotropic and homogeneous in the sense that the diffusivity is the same for any location and any direction on M. [sent-35, score-0.495]
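
On a point cloud the isotropic diffusion of Equ. 2 can be approximated by explicit Euler steps, with the graph Laplacian standing in for Δ. This is the standard discretization; the step size tau (which must be small enough for stability) and the number of steps are illustrative.

import numpy as np

def isotropic_diffusion(L, f0, tau=0.05, steps=100):
    # Explicit Euler integration of df/dt = -L f, the discrete analogue of Equ. 2:
    # each step spreads the values of f over the graph with the same diffusivity
    # at every node and in every direction.
    f = np.asarray(f0, dtype=float).copy()
    for _ in range(steps):
        f = f - tau * (L @ f)
    return f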

24 When f represents an image as a two-dimensional manifold embedded in R3 (x, y, f(x, y)), it can be shown that evolving f according to Equ. [sent-36, score-0.253]

25 It is often desirable to non-uniformly distribute the diffusivity, as shown by many image processing applications. [sent-38, score-0.045]

26 For instance, in image denoising, important structures such as edges should remain unchanged, so diffusion should be weak near edges. [sent-39, score-0.4]

27 Furthermore, diffusion should be stronger in the direction along edges rather than across edges. [sent-40, score-0.425]

28 This can be realized with anisotropic diffusion on R2 [19, 18]: ∂f/∂t = −ΔD f := div(D grad f) (Equ. 3), where D is a positive definite operator that controls the strength and direction of diffusion. [sent-41, score-0.769]

29 The authors of [19] construct D from the tensor product of grad f with itself. [sent-43, score-0.039]
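
A common construction in this spirit (a sketch, not necessarily the exact choice of [19]) builds a per-pixel tensor D from the smoothed outer product of the image gradient with itself and then remaps its eigenvalues, so that diffusivity stays close to one along edges and is damped across them. The smoothing scale sigma, the damping law, and lam are assumed parameters.

import numpy as np
from scipy.ndimage import gaussian_filter

def diffusion_tensor(img, sigma=1.0, lam=0.01):
    # Per-pixel 2x2 tensor D from the smoothed outer product of grad f with itself
    # (the structure tensor).  The eigenvector with the large eigenvalue points
    # across the edge, where diffusivity is damped; the orthogonal (along-edge)
    # direction keeps full diffusivity.
    fy, fx = np.gradient(img.astype(float))
    Jxx = gaussian_filter(fx * fx, sigma)
    Jxy = gaussian_filter(fx * fy, sigma)
    Jyy = gaussian_filter(fy * fy, sigma)
    H, W = img.shape
    D = np.zeros((H, W, 2, 2))
    for i in range(H):
        for j in range(W):
            J = np.array([[Jxx[i, j], Jxy[i, j]],
                          [Jxy[i, j], Jyy[i, j]]])
            w, V = np.linalg.eigh(J)               # eigenvalues in ascending order
            d_along = 1.0                          # along-edge direction (small eigenvalue)
            d_across = lam / (lam + w[1])          # across-edge direction (large eigenvalue)
            D[i, j] = V @ np.diag([d_along, d_across]) @ V.T
    return D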

30 A similar approach has also been taken for processing a two-dimensional surface embedded in R3. [sent-46, score-0.248]

31 A typical application is surface processing, where f represents the three-dimensional locations of sampled surface points in R3 [6, 5, 21]. [sent-47, score-0.257]

32 In this context, D can be constructed based on how the surfaces are curved in R3. [sent-48, score-0.156]

33 We wish the diffusivity to be strong for planar regions and weak across highly curved regions. [sent-49, score-0.515]

34 The authors of [5] proposed constructing D based on the principal curvatures and the corresponding principal directions at each location on the surface. [sent-51, score-0.176]

35 The resulting diffusion process smooths flat regions and enhances ridges on the surface. [sent-52, score-0.353]
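
At a single surface point, that idea can be sketched as follows (the decay function and lam are illustrative, not the exact choice of [5]): build D from the principal directions, with a diffusivity that falls off with the corresponding principal curvature, so that flat regions are smoothed while sharply curved ridges are preserved.

import numpy as np

def curvature_based_tensor(kappa, directions, lam=0.1):
    # kappa: the two principal curvatures; directions: 3x2 matrix whose columns are
    # the corresponding principal directions in the tangent plane.  Diffusivity is
    # near 1 where the surface is flat and small where it bends strongly.
    g = lam / (lam + np.asarray(kappa, dtype=float) ** 2)
    return directions @ np.diag(g) @ directions.T  # 3x3 tensor acting in the tangent plane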

36 A similar effect can also be obtained by diffusing surface normal vectors using mean and Gaussian curvatures [21]. [sent-53, score-0.361]

37 Anisotropic diffusion has been successful in processing two-dimensional objects embedded in R3 such as images and surfaces (in which the normal is uniquely defined up to the change of sign); however, its application to highdimensional data has not yet been explored. [sent-54, score-0.724]

38 The aim of our paper is to extend this framework to construct a generator of anisotropic diffusion processes (ΔD) and, with it, to build a discretized anisotropic regularizer on X. [sent-55, score-0.944]

39 We first note that the Laplace-Beltrami operator (Equ. [sent-56, score-0.139]

40 1) can also be used as a regularizer on a manifold. [sent-57, score-0.103]

41 The connection between the two aspects of Δ, as a regularizer and as a generator of isotropic diffusion processes on M, is well established: intuitively, from the regularization perspective, minimizing ‖f‖²_Δ [sent-65, score-0.748]

42 corresponds to penalizing the variation of f. (Figure 1 caption fragment: curvature at the green dot.) [sent-67, score-0.27]

43 Since it has zero intrinsic curvature here, intrinsically it is equivalent to R2. [sent-68, score-0.497]

44 The anisotropic case (Equ. 3) is straightforward: with ΔD as a regularizer, we emphasize the variation of f along the direction of high diffusivity. [sent-72, score-0.072]
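
The regularization reading used above can be written out explicitly. These are standard identities obtained by integration by parts (assuming a closed manifold or suitable boundary conditions), stated here in the notation of Equ. 1 and Equ. 3:

\[
\langle f, \Delta f \rangle_{L^2(M)} = \int_M \|\operatorname{grad} f\|^2 \, dV,
\qquad
\langle f, \Delta_D f \rangle_{L^2(M)} = \int_M \langle D \operatorname{grad} f, \operatorname{grad} f \rangle \, dV .
\]

Minimizing the second quantity therefore penalizes the variation of f most heavily along directions to which D assigns high diffusivity, which is the anisotropic regularization effect described in the surrounding text.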

45 Figure 1 visualizes the underlying idea with an example of a two-dimensional surface embedded in R3. [sent-75, score-0.203]

46 In this example, the red arrows pass through planar regions, and here diffusivity should be strong in the directions of the red arrows. [sent-76, score-0.486]

47 Conversely, the blue arrow passes through a highly extrinsically curved region that corresponds to a boundary between two manifolds. [sent-77, score-0.152]

48 Here, the diffusivity should be weak in the direction of the blue arrow. [sent-78, score-0.474]

49 However, existing manifold-based data diffusion and regularization operators are not capable of this (e. [sent-79, score-0.598]

50 Since the surface in Fig. 1 is intrinsically identical to R2, these operators do not distinguish between the two spaces. [sent-84, score-0.218]

51 In particular, the diffusivity is the same at every point on the surface and in R2. [sent-85, score-0.461]

52 Figure 2 shows the results of our preliminary pattern classification experiment, where the directions of estimated high curvature are often perpendicular to the directions of class decision boundaries. [sent-88, score-0.547]

53 This supports the idea of controlling diffusivity based on the direction and strength of both intrinsic and extrinsic curvature. [sent-89, score-0.796]
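
One plausible way to act on this idea over a point-cloud graph (an illustrative scheme, not the paper's exact re-weighted Laplacian) is to damp the weight of each edge according to how strongly the manifold is estimated to bend along that edge's direction, and to rebuild the Laplacian from the damped weights. The names S_list and T_list, the exponential damping, and beta are assumptions of this sketch.

import numpy as np

def curvature_reweighted_laplacian(X, W, S_list, T_list, beta=1.0):
    # Hypothetical curvature-aware re-weighting of a k-NN weight matrix W.
    # S_list[i]: (m x m) estimated curvature/shape operator at point i,
    # T_list[i]: (d x m) orthonormal tangent basis at point i.
    n = X.shape[0]
    W_new = W.copy()
    for i in range(n):
        for j in np.nonzero(W[i])[0]:
            z = T_list[i].T @ (X[j] - X[i])        # edge direction in tangent coordinates
            z = z / (np.linalg.norm(z) + 1e-12)
            bend = float(z @ S_list[i] @ z)        # estimated bending along the edge
            W_new[i, j] = W[i, j] * np.exp(-beta * bend)
    W_new = 0.5 * (W_new + W_new.T)                # re-symmetrize
    return np.diag(W_new.sum(axis=1)) - W_new      # re-weighted graph Laplacian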

54 To build upon this, we next discuss a procedure which estimates the extrinsic and intrinsic curvature and, with it, develops a practical regularization operator on a manifold. [sent-90, score-0.839]

55 In general, sub-manifold curvature manifests both intrinsically and extrinsically. [sent-92, score-0.416]

56 (Figure 2 caption fragment) Points around the origin projected onto Riemannian normal coordinates. [sent-93, score-0.137]

57 Circles and crosses represent two classes while magenta lines show the direction of highest curvature. [sent-98, score-0.072]

58 This direction is the first eigenvector of the generalized shape operator. [sent-99, score-0.072]

59 Estimated high curvature directions are often perpendicular to decision boundary directions: First two columns: the directions are strongly inversely correlated. [sent-100, score-0.547]

60 Third column: the directions pass through multiple decision boundaries but are still perpendicular. [sent-101, score-0.169]

61 Last column: the directions are less strongly correlated but still reasonable. [sent-102, score-0.097]

62 Curvature-aware regularization: In general, the curvature of a Riemannian manifold M is captured by a fourth-order tensor called the Riemann curvature tensor. [sent-104, score-0.835]

63 Then, how is the manifold M (of dimension m) curved with respect to the ambient manifold? [sent-105, score-0.53]

64 A classical result (see [15]) states that this quantity is completely determined by a third-order operator called the second fundamental form. [sent-109, score-0.159]

65 The second fundamental form II measures how the ambient derivative deviates from the intrinsic derivative ∇ for X, Y ∈ T(M). [sent-121, score-0.154]

66 At each p ∈ M, evaluating II(X, Y) corresponds to projecting onto the normal space Np(M) ⊂ N(M). [sent-125, score-0.137]

67 For further insight into its geometric characteristics, we represent II in a special coordinate frame. [sent-128, score-0.075]

68 The analysis in the remainder of this section focuses entirely on a coordinate chart at a point p ∈ M and, accordingly, without loss of generality we focus on II evaluated at p. [sent-129, score-0.205]

69 First, we construct an adapted orthonormal frame [14, 15] {Y1, . . . [sent-133, score-0.072]

70 . . . , Yn}, which specifies an orthonormal coordinate chart {y1, . . . [sent-136, score-0.238]

71 Then, in the combined coordinates {x1, . . . [sent-166, score-0.036]

72 This representation (Equ. 7) not only facilitates the subsequent computation but also clearly manifests the geometrical significance of the second fundamental form. [sent-179, score-0.073]

73 at p as hyper-surfaces, each of which characterizes how the corresponding surface is bending, i. [sent-191, score-0.106]

74 This simplicity in the representation of II is due to the use of the Riemannian normal coordinates [sent-194, score-0.212]

75 , in which the manifold appears Euclidean up to second order. [sent-195, score-0.156]
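
A standard way to obtain such a Hessian representation from a sampled point cloud, consistent with the hyper-surface reading above but not necessarily the paper's exact procedure, is local quadratic fitting: take the k nearest neighbours of p, split the local PCA basis into m tangent and (n − m) normal directions, and regress each normal coordinate on the quadratic monomials of the tangent coordinates. The neighbourhood size k and the assumption that the intrinsic dimension m is known are simplifications of this sketch.

import numpy as np

def second_fundamental_form(X, p_idx, m, k=20):
    # Estimate the Hessians representing II at X[p_idx]: for each normal direction,
    # fit the normal coordinate as a quadratic function of the tangent coordinates
    # of the k nearest neighbours (local PCA gives the tangent/normal split).
    p = X[p_idx]
    nbr = X[np.argsort(np.sum((X - p) ** 2, axis=1))[1:k + 1]] - p
    _, _, Vt = np.linalg.svd(nbr, full_matrices=True)
    T, N = Vt[:m].T, Vt[m:].T                      # tangent (d x m) and normal (d x (d-m)) bases
    x, y = nbr @ T, nbr @ N                        # local tangent / normal coordinates
    pairs = [(r, s) for r in range(m) for s in range(r, m)]
    quad = np.stack([x[:, r] * x[:, s] for r, s in pairs], axis=1)
    A = np.hstack([np.ones((k, 1)), x, quad])      # constant + linear + quadratic terms
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    q = coef[1 + m:]                               # keep only the quadratic coefficients
    H = np.zeros((N.shape[1], m, m))
    for c, (r, s) in enumerate(pairs):
        for i in range(N.shape[1]):
            if r == s:
                H[i, r, r] = 2.0 * q[c, i]         # d^2 y_i / dx_r^2
            else:
                H[i, r, s] = H[i, s, r] = q[c, i]  # mixed second derivatives
    return H, T, N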

76 We have just seen how to characterize the curvature of any arbitrary Riemannian submanifold M with codimensionality higher than 1. [sent-206, score-0.306]

77 Our next step is to build a generalization of the operator Dp : T(M) → T(M) in Equation 3 using the second fundamental form II. [sent-207, score-0.1]

78 First, we raise the first index of II in M (Equ. 8). [sent-208, score-0.034]

79 Then, the generalized (absolute) shape operator s : T(M) → T(M) is constructed by casting the individual Hessians positive definite and removing the normal component by taking [sent-216, score-0.1]

80 the inner product of the normal component of II. [sent-217, score-0.137]

81 where |A|P is a positive definite version of a matrix A. [sent-230, score-0.044]

82 The last step makes s depend on the choice of the normal frame {Yi} (i = m+1, . . . , n), which we fix by exploiting the distribution of the data on M (see Sec. [sent-231, score-0.137]

83 In informal terms, s receives a vector Zp ∈ TpM and magnifies or reduces each of its components {Zi} depending on how the corresponding coordinate directions {∂/∂xi} are curved in {ym+1, . . . [sent-233, score-0.149]

84 corresponds to the curvature of a curve c with c(0) = p and ċ(0) = Zp, interpreted as a one-dimensional submanifold of Rn. [sent-254, score-0.075]

85 Accordingly, the corresponding diffusivity depends only on the curvature direction and magnitude. [sent-256, score-0.697]
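
Continuing the sketch above, one simplified reading of the generalized (absolute) shape operator is to make each Hessian positive definite by taking absolute eigenvalues and to sum over the normal directions; the paper additionally fixes the normal frame using the data distribution, which is omitted here.

import numpy as np

def psd_version(A):
    # |A|_P: replace the eigenvalues of a symmetric matrix by their absolute values,
    # giving a positive semi-definite matrix with the same eigenvectors.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.abs(w)) @ V.T

def shape_operator(H):
    # Simplified generalized shape operator: the sum of |H_i|_P over the normal
    # directions.  Applied to a tangent vector, it magnifies components along
    # directions in which the manifold bends strongly.
    s = np.zeros(H.shape[1:])
    for Hi in H:
        s += psd_version(Hi)
    return s

The result could serve as the per-point operator S_list[i] in the edge re-weighting sketch given earlier, yielding a curvature-aware graph Laplacian.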

86 (Footnote 3) Another way of constructing the orthonormal frame {Ym+1, . . . [sent-258, score-0.072]

87 . . . , Yn} ⊂ N(M) is to choose each normal vector Yi [sent-261, score-0.137]

88 incrementally by maximizing the squared norm of the Yi-component. [sent-262, score-0.034]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('diffusivity', 0.355), ('diffusion', 0.353), ('curvature', 0.27), ('extrinsic', 0.215), ('anisotropic', 0.2), ('riemannian', 0.171), ('manifold', 0.156), ('intrinsic', 0.154), ('laplacian', 0.154), ('operators', 0.145), ('normal', 0.137), ('zp', 0.121), ('curved', 0.113), ('surface', 0.106), ('ambient', 0.105), ('regularizer', 0.103), ('operator', 0.1), ('regularization', 0.1), ('directions', 0.097), ('embedded', 0.097), ('chart', 0.091), ('dxrdxs', 0.089), ('ngent', 0.089), ('yixs', 0.089), ('generator', 0.088), ('yn', 0.085), ('xm', 0.083), ('dxs', 0.079), ('curvatures', 0.079), ('tp', 0.077), ('geometry', 0.076), ('coordinate', 0.075), ('manifests', 0.073), ('intrinsically', 0.073), ('direction', 0.072), ('orthonormal', 0.072), ('isotropic', 0.068), ('accordingly', 0.064), ('manifolds', 0.063), ('yi', 0.061), ('xy', 0.061), ('fundamental', 0.059), ('graph', 0.056), ('cloud', 0.055), ('respects', 0.052), ('ii', 0.051), ('ym', 0.049), ('highdimensional', 0.049), ('weak', 0.047), ('dv', 0.045), ('perpendicular', 0.045), ('hessian', 0.045), ('processing', 0.045), ('definite', 0.044), ('surfaces', 0.043), ('mesh', 0.04), ('lished', 0.039), ('alues', 0.039), ('coordi', 0.039), ('diffusing', 0.039), ('ethda', 0.039), ('extrinsically', 0.039), ('faor', 0.039), ('ionid', 0.039), ('kwang', 0.039), ('ohifs', 0.039), ('oiffo', 0.039), ('reminder', 0.039), ('rva', 0.039), ('tak', 0.039), ('tihones', 0.039), ('tompkin', 0.039), ('tensor', 0.039), ('decision', 0.038), ('sinp', 0.036), ('generali', 0.036), ('grad', 0.036), ('nian', 0.036), ('conditio', 0.036), ('cre', 0.036), ('haet', 0.036), ('informal', 0.036), ('informatik', 0.036), ('iste', 0.036), ('ojf', 0.036), ('riemann', 0.036), ('submanifold', 0.036), ('tons', 0.036), ('tpm', 0.036), ('tthriec', 0.036), ('connection', 0.036), ('mn', 0.035), ('spectral', 0.035), ('mand', 0.034), ('zed', 0.034), ('tru', 0.034), ('fnu', 0.034), ('mfor', 0.034), ('noofr', 0.034), ('pass', 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 100 iccv-2013-Curvature-Aware Regularization on Riemannian Submanifolds

Author: Kwang In Kim, James Tompkin, Christian Theobalt

Abstract: One fundamental assumption in object recognition as well as in other computer vision and pattern recognition problems is that the data generation process lies on a manifold and that it respects the intrinsic geometry of the manifold. This assumption is held in several successful algorithms for diffusion and regularization, in particular, in graph-Laplacian-based algorithms. We claim that the performance of existing algorithms can be improved if we additionally account for how the manifold is embedded within the ambient space, i.e., if we consider the extrinsic geometry of the manifold. We present a procedure for characterizing the extrinsic (as well as intrinsic) curvature of a manifold M which is described by a sampled point cloud in a high-dimensional Euclidean space. Once estimated, we use this characterization in general diffusion and regularization on M, and form a new regularizer on a point cloud. The resulting re-weighted graph Laplacian demonstrates superior performance over classical graph Laplacian in semisupervised learning and spectral clustering.

2 0.23070566 296 iccv-2013-On the Mean Curvature Flow on Graphs with Applications in Image and Manifold Processing

Author: Abdallah El_Chakik, Abderrahim Elmoataz, Ahcene Sadi

Abstract: In this paper, we propose an adaptation and transcription of the mean curvature level set equation on a general discrete domain (weighted graphs with arbitrary topology). We introduce the perimeters on graph using difference operators and define the curvature as the first variation of these perimeters. Our proposed approach of mean curvature unifies both local and non local notions of mean curvature on Euclidean domains. Furthermore, it allows the extension to the processing of manifolds and data which can be represented by graphs.

3 0.21042807 309 iccv-2013-Partial Enumeration and Curvature Regularization

Author: Carl Olsson, Johannes Ulén, Yuri Boykov, Vladimir Kolmogorov

Abstract: Energies with high-order non-submodular interactions have been shown to be very useful in vision due to their high modeling power. Optimization of such energies, however, is generally NP-hard. A naive approach that works for small problem instances is exhaustive search, that is, enumeration of all possible labelings of the underlying graph. We propose a general minimization approach for large graphs based on enumeration of labelings of certain small patches. This partial enumeration technique reduces complex highorder energy formulations to pairwise Constraint Satisfaction Problems with unary costs (uCSP), which can be efficiently solved using standard methods like TRW-S. Our approach outperforms a number of existing state-of-the-art algorithms on well known difficult problems (e.g. curvature regularization, stereo, deconvolution); it gives near global minimum and better speed. Our main application of interest is curvature regularization. In the context of segmentation, our partial enumeration technique allows to evaluate curvature directly on small patches using a novel integral geometry approach. 1

4 0.19977081 389 iccv-2013-Shortest Paths with Curvature and Torsion

Author: Petter Strandmark, Johannes Ulén, Fredrik Kahl, Leo Grady

Abstract: This paper describes a method of finding thin, elongated structures in images and volumes. We use shortest paths to minimize very general functionals of higher-order curve properties, such as curvature and torsion. Our globally optimal method uses line graphs and its runtime is polynomial in the size of the discretization, often in the order of seconds on a single computer. To our knowledge, we are the first to perform experiments in three dimensions with curvature and torsion regularization. The largest graphs we process have almost one hundred billion arcs. Experiments on medical images and in multi-view reconstruction show the significance and practical usefulness of regularization based on curvature while torsion is still only tractable for small-scale problems.

5 0.14247285 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

Author: Jan Lellmann, Evgeny Strekalovskiy, Sabrina Koetter, Daniel Cremers

Abstract: While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories.

6 0.12512368 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

7 0.12248325 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

8 0.11533463 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

9 0.10962628 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces

10 0.10599015 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

11 0.10356462 114 iccv-2013-Dictionary Learning and Sparse Coding on Grassmann Manifolds: An Extrinsic Solution

12 0.093758725 16 iccv-2013-A Generic Deformation Model for Dense Non-rigid Surface Registration: A Higher-Order MRF-Based Approach

13 0.09194006 209 iccv-2013-Image Guided Depth Upsampling Using Anisotropic Total Generalized Variation

14 0.085591845 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

15 0.085066006 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects

16 0.085000664 307 iccv-2013-Parallel Transport of Deformations in Shape Space of Elastic Surfaces

17 0.084779903 56 iccv-2013-Automatic Registration of RGB-D Scans via Salient Directions

18 0.081644066 284 iccv-2013-Multiview Photometric Stereo Using Planar Mesh Parameterization

19 0.078239053 119 iccv-2013-Discriminant Tracking Using Tensor Representation with Semi-supervised Improvement

20 0.078078434 429 iccv-2013-Tree Shape Priors with Connectivity Constraints Using Convex Relaxation on General Graphs


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.148), (1, -0.087), (2, -0.056), (3, -0.015), (4, -0.065), (5, 0.059), (6, -0.003), (7, -0.054), (8, 0.062), (9, -0.12), (10, -0.06), (11, -0.006), (12, -0.071), (13, 0.071), (14, 0.158), (15, 0.026), (16, -0.012), (17, 0.012), (18, -0.013), (19, -0.055), (20, 0.06), (21, 0.12), (22, 0.141), (23, -0.067), (24, -0.052), (25, 0.163), (26, -0.05), (27, -0.03), (28, 0.157), (29, 0.023), (30, -0.199), (31, -0.035), (32, -0.087), (33, 0.045), (34, -0.0), (35, -0.059), (36, 0.052), (37, 0.063), (38, -0.118), (39, -0.015), (40, -0.008), (41, 0.016), (42, -0.048), (43, -0.06), (44, -0.077), (45, 0.008), (46, 0.027), (47, 0.069), (48, 0.004), (49, 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96559763 100 iccv-2013-Curvature-Aware Regularization on Riemannian Submanifolds

Author: Kwang In Kim, James Tompkin, Christian Theobalt

Abstract: One fundamental assumption in object recognition as well as in other computer vision and pattern recognition problems is that the data generation process lies on a manifold and that it respects the intrinsic geometry of the manifold. This assumption is held in several successful algorithms for diffusion and regularization, in particular, in graph-Laplacian-based algorithms. We claim that the performance of existing algorithms can be improved if we additionally account for how the manifold is embedded within the ambient space, i.e., if we consider the extrinsic geometry of the manifold. We present a procedure for characterizing the extrinsic (as well as intrinsic) curvature of a manifold M which is described by a sampled point cloud in a high-dimensional Euclidean space. Once estimated, we use this characterization in general diffusion and regularization on M, and form a new regularizer on a point cloud. The resulting re-weighted graph Laplacian demonstrates superior performance over classical graph Laplacian in semisupervised learning and spectral clustering.

2 0.81685108 296 iccv-2013-On the Mean Curvature Flow on Graphs with Applications in Image and Manifold Processing

Author: Abdallah El_Chakik, Abderrahim Elmoataz, Ahcene Sadi

Abstract: In this paper, we propose an adaptation and transcription of the mean curvature level set equation on a general discrete domain (weighted graphs with arbitrary topology). We introduce the perimeters on graph using difference operators and define the curvature as the first variation of these perimeters. Our proposed approach of mean curvature unifies both local and non local notions of mean curvature on Euclidean domains. Furthermore, it allows the extension to the processing of manifolds and data which can be represented by graphs.

3 0.77084851 389 iccv-2013-Shortest Paths with Curvature and Torsion

Author: Petter Strandmark, Johannes Ulén, Fredrik Kahl, Leo Grady

Abstract: This paper describes a method of finding thin, elongated structures in images and volumes. We use shortest paths to minimize very general functionals of higher-order curve properties, such as curvature and torsion. Our globally optimal method uses line graphs and its runtime is polynomial in the size of the discretization, often in the order of seconds on a single computer. To our knowledge, we are the first to perform experiments in three dimensions with curvature and torsion regularization. The largest graphs we process have almost one hundred billion arcs. Experiments on medical images and in multi-view reconstruction show the significance and practical usefulness of regularization based on curvature while torsion is still only tractable for small-scale problems.

4 0.70855117 309 iccv-2013-Partial Enumeration and Curvature Regularization

Author: Carl Olsson, Johannes Ulén, Yuri Boykov, Vladimir Kolmogorov

Abstract: Energies with high-order non-submodular interactions have been shown to be very useful in vision due to their high modeling power. Optimization of such energies, however, is generally NP-hard. A naive approach that works for small problem instances is exhaustive search, that is, enumeration of all possible labelings of the underlying graph. We propose a general minimization approach for large graphs based on enumeration of labelings of certain small patches. This partial enumeration technique reduces complex highorder energy formulations to pairwise Constraint Satisfaction Problems with unary costs (uCSP), which can be efficiently solved using standard methods like TRW-S. Our approach outperforms a number of existing state-of-the-art algorithms on well known difficult problems (e.g. curvature regularization, stereo, deconvolution); it gives near global minimum and better speed. Our main application of interest is curvature regularization. In the context of segmentation, our partial enumeration technique allows to evaluate curvature directly on small patches using a novel integral geometry approach. 1

5 0.60861105 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

Author: Jan Lellmann, Evgeny Strekalovskiy, Sabrina Koetter, Daniel Cremers

Abstract: While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds including spheres and three-dimensional rotations, and allows to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories.

6 0.44638568 284 iccv-2013-Multiview Photometric Stereo Using Planar Mesh Parameterization

7 0.44043157 307 iccv-2013-Parallel Transport of Deformations in Shape Space of Elastic Surfaces

8 0.43445957 347 iccv-2013-Recursive Estimation of the Stein Center of SPD Matrices and Its Applications

9 0.42642662 16 iccv-2013-A Generic Deformation Model for Dense Non-rigid Surface Registration: A Higher-Order MRF-Based Approach

10 0.4210428 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

11 0.40921503 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

12 0.40835676 183 iccv-2013-Geometric Registration Based on Distortion Estimation

13 0.39659274 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues

14 0.38757733 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects

15 0.38700604 134 iccv-2013-Efficient Higher-Order Clustering on the Grassmann Manifold

16 0.37354001 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces

17 0.37214816 178 iccv-2013-From Semi-supervised to Transfer Counting of Crowds

18 0.36741641 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error

19 0.36691085 429 iccv-2013-Tree Shape Priors with Connectivity Constraints Using Convex Relaxation on General Graphs

20 0.35870585 262 iccv-2013-Matching Dry to Wet Materials


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.044), (7, 0.04), (26, 0.058), (31, 0.063), (35, 0.013), (40, 0.021), (42, 0.112), (48, 0.022), (62, 0.244), (64, 0.036), (73, 0.048), (89, 0.178), (95, 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.83381307 100 iccv-2013-Curvature-Aware Regularization on Riemannian Submanifolds

Author: Kwang In Kim, James Tompkin, Christian Theobalt

Abstract: One fundamental assumption in object recognition as well as in other computer vision and pattern recognition problems is that the data generation process lies on a manifold and that it respects the intrinsic geometry of the manifold. This assumption is held in several successful algorithms for diffusion and regularization, in particular, in graph-Laplacian-based algorithms. We claim that the performance of existing algorithms can be improved if we additionally account for how the manifold is embedded within the ambient space, i.e., if we consider the extrinsic geometry of the manifold. We present a procedure for characterizing the extrinsic (as well as intrinsic) curvature of a manifold M which is described by a sampled point cloud in a high-dimensional Euclidean space. Once estimated, we use this characterization in general diffusion and regularization on M, and form a new regularizer on a point cloud. The resulting re-weighted graph Laplacian demonstrates superior performance over classical graph Laplacian in semisupervised learning and spectral clustering.

2 0.81183708 55 iccv-2013-Automatic Kronecker Product Model Based Detection of Repeated Patterns in 2D Urban Images

Author: Juan Liu, Emmanouil Psarakis, Ioannis Stamos

Abstract: Repeated patterns (such as windows, tiles, balconies and doors) are prominent and significant features in urban scenes. Therefore, detection of these repeated patterns becomes very important for city scene analysis. This paper attacks the problem of repeated patterns detection in a precise, efficient and automatic way, by combining traditional feature extraction followed by a Kronecker product lowrank modeling approach. Our method is tailored for 2D images of building fac ¸ades. We have developed algorithms for automatic selection ofa representative texture withinfa ¸cade images using vanishing points and Harris corners. After rectifying the input images, we describe novel algorithms that extract repeated patterns by using Kronecker product based modeling that is based on a solid theoretical foundation. Our approach is unique and has not ever been used for fac ¸ade analysis. We have tested our algorithms in a large set of images.

3 0.7592181 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

Author: Meng Yang, Luc Van_Gool, Lei Zhang

Abstract: Face recognition (FR) with a single training sample per person (STSPP) is a very challenging problem due to the lack of information to predict the variations in the query sample. Sparse representation based classification has shown interesting results in robust FR; however, its performance will deteriorate much for FR with STSPP. To address this issue, in this paper we learn a sparse variation dictionary from a generic training set to improve the query sample representation by STSPP. Instead of learning from the generic training set independently w.r.t. the gallery set, the proposed sparse variation dictionary learning (SVDL) method is adaptive to the gallery set by jointly learning a projection to connect the generic training set with the gallery set. The learnt sparse variation dictionary can be easily integrated into the framework of sparse representation based classification so that various variations in face images, including illumination, expression, occlusion, pose, etc., can be better handled. Experiments on the large-scale CMU Multi-PIE, FRGC and LFW databases demonstrate the promising performance of SVDL on FR with STSPP.

4 0.73874855 321 iccv-2013-Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model

Author: Xiang Yu, Junzhou Huang, Shaoting Zhang, Wang Yan, Dimitris N. Metaxas

Abstract: This paper addresses the problem of facial landmark localization and tracking from a single camera. We present a two-stage cascaded deformable shape model to effectively and efficiently localize facial landmarks with large head pose variations. For face detection, we propose a group sparse learning method to automatically select the most salient facial landmarks. By introducing 3D face shape model, we use procrustes analysis to achieve pose-free facial landmark initialization. For deformation, the first step uses mean-shift local search with constrained local model to rapidly approach the global optimum. The second step uses component-wise active contours to discriminatively refine the subtle shape variation. Our framework can simultaneously handle face detection, pose-free landmark localization and tracking in real time. Extensive experiments are conducted on both laboratory environmental face databases and face-in-the-wild databases. All results demonstrate that our approach has certain advantages over state-of-theart methods in handling pose variations1.

5 0.7349425 397 iccv-2013-Space-Time Tradeoffs in Photo Sequencing

Author: Tali Dekel_(Basha), Yael Moses, Shai Avidan

Abstract: Photo-sequencing is the problem of recovering the temporal order of a set of still images of a dynamic event, taken asynchronously by a set of uncalibrated cameras. Solving this problem is a first, crucial step for analyzing (or visualizing) the dynamic content of the scene captured by a large number of freely moving spectators. We propose a geometric based solution, followed by rank aggregation to . ac . i l avidan@ eng .t au . ac . i l the photo-sequencing problem. Our algorithm trades spatial certainty for temporal certainty. Whereas the previous solution proposed by [4] relies on two images taken from the same static camera to eliminate uncertainty in space, we drop the static-camera assumption and replace it with temporal information available from images taken from the same (moving) camera. Our method thus overcomes the limitation of the static-camera assumption, and scales much better with the duration of the event and the spread of cameras in space. We present successful results on challenging real data sets and large scale synthetic data (250 images).

6 0.69343448 376 iccv-2013-Scene Text Localization and Recognition with Oriented Stroke Detection

7 0.68732226 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

8 0.68540132 79 iccv-2013-Coherent Object Detection with 3D Geometric Context from a Single Image

9 0.68358469 349 iccv-2013-Regionlets for Generic Object Detection

10 0.68327713 300 iccv-2013-Optical Flow via Locally Adaptive Fusion of Complementary Data Costs

11 0.68245023 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

12 0.68098176 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

13 0.67999113 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

14 0.67880309 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal

15 0.67808473 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps

16 0.67805517 223 iccv-2013-Joint Noise Level Estimation from Personal Photo Collections

17 0.67784739 315 iccv-2013-PhotoOCR: Reading Text in Uncontrolled Conditions

18 0.67766631 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

19 0.677652 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition

20 0.67665386 196 iccv-2013-Hierarchical Data-Driven Descent for Efficient Optimal Deformation Estimation