iccv iccv2013 iccv2013-28 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Jinwei Ye, Yu Ji, Jingyi Yu
Abstract: Traditional stereo matching assumes perspective viewing cameras under a translational motion: the second camera is translated away from the first one to create parallax. In this paper, we investigate a different, rotational stereo model on a special multi-perspective camera, the XSlit camera [9, 24]. We show that rotational XSlit (R-XSlit) stereo can be effectively created by fixing the sensor and slit locations but switching the two slits’ directions. We first derive the epipolar geometry of R-XSlit in the 4D light field ray space. Our derivation leads to a simple but effective scheme for locating corresponding epipolar “curves”. To conduct stereo matching, we further derive a new disparity term in our model and develop a patch-based graph-cut solution. To validate our theory, we assemble an XSlit lens by using a pair of cylindrical lenses coupled with slit-shaped apertures. The XSlit lens can be mounted on commodity cameras where the slit directions are adjustable to form desirable R-XSlit pairs. We show through experiments that R-XSlit provides a potentially advantageous imaging system for conducting fixed-location, dynamic baseline stereo.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Traditional stereo matching assumes perspective viewing cameras under a translational motion: the second camera is translated away from the first one to create parallax. [sent-3, score-0.544]
2 In this paper, we investigate a different, rotational stereo model on a special multi-perspective camera, the XSlit camera [9, 24]. [sent-4, score-0.421]
3 We show that rotational XSlit (R-XSlit) stereo can be effectively created by fixing the sensor and slit locations but switching the two slits’ directions. [sent-5, score-0.428]
4 We first derive the epipolar geometry of R-XSlit in the 4D light field ray space. [sent-6, score-0.309]
5 To conduct stereo matching, we further derive a new disparity term in our model and develop a patch-based graph-cut solution. [sent-8, score-0.511]
6 To validate our theory, we assemble an XSlit lens by using a pair of cylindrical lenses coupled with slit-shaped apertures. [sent-9, score-0.272]
7 The XSlit lens can be mounted on commodity cameras where the slit directions are adjustable to form desirable R-XSlit pairs. [sent-10, score-0.25]
8 Traditional approaches assume perspective viewing cameras under a translational motion: the second camera is translated away from the first one to have sufficient camera baseline for producing parallax [6]. [sent-15, score-0.37]
9 In this paper, we investigate a different, rotational stereo model. [sent-18, score-0.336]
10 Instead of translating the camera, we aim to create stereo pairs by rotating the camera, or more precisely, rays collected by the camera. [sent-19, score-0.418]
11 Left: an illustration of the rotational XSlit stereo model. [sent-21, score-0.336]
12 Rotating a pinhole camera around its center of projection (CoP) results in the same set of rays and does not produce stereo pairs. [sent-23, score-0.358]
13 We therefore focus on creating rotational stereo using non-pinhole or multi-perspective cameras [23]. [sent-24, score-0.391]
14 When properly configured, the resulting ray geometry is potentially amenable for stereo matching. [sent-31, score-0.4]
15 There have been significant advances on the theory of multi-perspective stereo in the past decade. [sent-32, score-0.273]
16 Seitz [16, 17] characterized all possible multi-perspective stereo pairs and concluded that the epipolar geometry, if it exists, has to be a doubly ruled surface. [sent-33, score-0.433]
17 Therefore, only a small variety of multiperspective stereo pairs exist. [sent-34, score-0.335]
18 Pajdla [10, 11, 12] independently obtained the same results and further studied stereo matching on the multi-perspective linear oblique camera. [sent-35, score-0.349]
19 Their results show that a small variety of multi-perspective stereo pairs exist. [sent-36, score-0.273]
20 In this paper, we present a practical multiperspective stereo solution based on a special class of multiperspective cameras, the XSlit camera [9, 24]. [sent-37, score-0.482]
21 [4] derived the translational XSlit stereo model: an XSlit camera can be translated along one of the two slits to form valid stereo pairs with purely horizontal parallax. [sent-40, score-0.951]
22 In this paper, we show that, instead of translating the XSlit cameras, we can form valid stereo pairs by fixing the sensor/slit locations but switching the slits’ directions. [sent-41, score-0.294]
23 We call this model rotational XSlit stereo or R-XSlit stereo. [sent-42, score-0.336]
24 Our derivation also leads to simple but effective schemes for locating corresponding epipolar “curves” and analyzing recoverable depth range and depth error. [sent-45, score-0.379]
25 For stereo matching, we further derive a new R-XSlit disparity term and develop a patch-based graph-cut solution. [sent-46, score-0.492]
26 For real scenes, we assemble an XSlit lens using a pair of cylindrical lenses coupled with slit-shaped apertures. [sent-48, score-0.272]
27 The XSlit lens can be mounted on commodity cameras where the slit direction can be changed to form an R-XSlit pair. [sent-49, score-0.25]
28 We show through experiments that R-XSlit provides a potentially advantageous stereo imaging system. [sent-50, score-0.288]
29 In particular, it can achieve “fixed-location” stereo by rotating only the slits, hence eliminating the need to place two cameras at different spatial locations as in perspective stereo. [sent-51, score-0.467]
30 R-XSlit Stereo Model An XSlit camera collects rays that simultaneously pass through two oblique (neither parallel nor coplanar) slits in 3D space [9, 24]. [sent-53, score-0.471]
31 The ray geometry of XSlit has been previously studied using the XSlit projection matrix [24], linear oblique [9], light field parametrization [22], or ray regulus [14]. [sent-54, score-0.302]
32 Specifically, we choose two planes Πuv and Πst parallel to both slits but containing neither slit. [sent-56, score-0.255]
33 Next, we orthogonally project both slits on Πuv and use their intersection point as the origin of the coordinate system. [sent-57, score-0.276]
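The ray-space setup above can be made concrete with a small sketch. The following is a minimal illustration under stated assumptions, not the paper's exact parameterization: it assumes the two-plane parameterization with Πuv at z = 0 and Πst at z = 1, models each slit as a 3D line, and checks whether a ray passes through both slits, which is the membership test an XSlit camera implies. The function names, slit depths, and example points are mine, not the paper's.

```python
import numpy as np

def line_line_distance(p1, d1, p2, d2):
    """Shortest distance between two 3D lines, each given as point + direction."""
    n = np.cross(d1, d2)
    n_norm = np.linalg.norm(n)
    if n_norm < 1e-12:  # parallel lines
        return np.linalg.norm(np.cross(p2 - p1, d1)) / np.linalg.norm(d1)
    return abs(np.dot(p2 - p1, n)) / n_norm

def is_xslit_ray(u, v, s, t, slit1, slit2, tol=1e-8):
    """A ray through (u, v) on Pi_uv (z=0) and (s, t) on Pi_st (z=1) belongs to the
    XSlit camera iff it meets both slit lines."""
    p = np.array([u, v, 0.0])
    d = np.array([s - u, t - v, 1.0])
    return (line_line_distance(p, d, *slit1) < tol and
            line_line_distance(p, d, *slit2) < tol)

# Axis-aligned (POXSlit) example: slits at depths Z1, Z2, parallel to the u- and v-axis.
Z1, Z2 = 2.0, 5.0
slit1 = (np.array([0.0, 0.0, Z1]), np.array([1.0, 0.0, 0.0]))
slit2 = (np.array([0.0, 0.0, Z2]), np.array([0.0, 1.0, 0.0]))

# Build a ray through one point on each slit and verify it is collected.
A, B = np.array([1.5, 0.0, Z1]), np.array([0.0, 0.7, Z2])
P0 = A + (B - A) * (0.0 - A[2]) / (B - A)[2]   # intersection with z = 0
P1 = A + (B - A) * (1.0 - A[2]) / (B - A)[2]   # intersection with z = 1
print(is_xslit_ray(P0[0], P0[1], P1[0], P1[1], slit1, slit2))  # True
```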
34 , the two slits switch their directions as shown in Fig. [sent-72, score-0.235]
35 Although the general theory behind multi-perspective stereo is well known [4, 10, 11, 12, 17], i. [sent-82, score-0.273]
36 , only three varieties of epipolar geometry exist (planes, hyperboloids, and hyperbolic-paraboloids), effectively testing whether a pair of multi-perspective cameras forms valid epipolar geometry is still a challenging problem. [sent-84, score-0.365]
37 Our approach is to first locate potential epipolar curves on corresponding images and then determine if the two curves form valid epipolar geometry. [sent-85, score-0.415]
38 Existence To find potential epipolar curves in an R-XSlit pair P(Z1, Z2, θ), we first trace out a ray r0 [u0, v0, σ0, τ0] from C(Z1, Z2, 0, θ) in P. [sent-88, score-0.31]
39 sinθ · uv − cosθ · v^2 = sinθ · u0v0 − cosθ · v0^2 (3) To determine if these rays form valid epipolar geometry, we carry out a ray geometry analysis. [sent-103, score-0.432]
40 The epipolar curves in an R-XSlit pair P(Z1, Z2, θ) are sinθ · uv − cosθ · v^2 = κ, where κ is some constant. [sent-106, score-0.429]
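Given this curve form, corresponding pixels can be located by matching the invariant κ. The snippet below is a hedged sketch based on the curve equation as reconstructed above (sinθ·uv − cosθ·v^2 = κ); the function names, pixel coordinates, and θ are arbitrary example values of mine.

```python
import numpy as np

def epipolar_kappa(u, v, theta):
    """Invariant kappa = sin(theta)*u*v - cos(theta)*v**2 shared by a pixel and its
    matches along the corresponding epipolar curve."""
    return np.sin(theta) * u * v - np.cos(theta) * v ** 2

def epipolar_curve_u(v, kappa, theta):
    """Solve the curve equation for u at a given v (undefined at v = 0, the
    disparity singularity noted later in the text)."""
    return (kappa + np.cos(theta) * v ** 2) / (np.sin(theta) * v)

theta = np.deg2rad(90.0)              # axis-aligned case: curves become u*v = const
u0, v0 = 12.0, -7.5                   # a pixel in the first view
kappa = epipolar_kappa(u0, v0, theta)
vs = np.linspace(-20.0, 20.5, 9)      # sample values that avoid v = 0
us = epipolar_curve_u(vs, kappa, theta)  # samples of the matching hyperbola
```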
41 Theorem 1 reveals that, unlike in perspective stereo, the epipolar “lines” in our R-XSlit pair are hyperbolas. [sent-145, score-0.278]
42 Notice that, although our analysis focuses on R-XSlit stereo, it can also be used to prove the translational XSlit stereo condition [4]. [sent-149, score-0.302]
43 R-XSlit can also be viewed as a special case of the second XSlit condition in [4] where the four slits intersect at four distinct points. [sent-150, score-0.276]
44 In traditional perspective stereo, disparity is defined as purely horizontal parallax. [sent-157, score-0.334]
45 However, in our R-XSlit pair, corresponding pixels exhibit both vertical parallax and horizontal parallax as the epipolar curves are hyperbolas. [sent-158, score-0.281]
46 Recall that a valid disparity definition should satisfy three criteria: 1) the disparity should only depend on object depth; 2) it should be a monotonic function of object depth; and 3) it can be used to locate the corresponding pixel in the second view. [sent-160, score-0.52]
47 = Z2 · y / (Z2 − z) (7) To satisfy criterion 1), the disparity should not contain x or y terms. [sent-170, score-0.236]
48 = Z2 · (z − Z1) / (Z1 · (z − Z2)) (8) It is easy to see that dXS is monotonically decreasing in z for z > Z2 and therefore satisfies disparity criterion 2). [sent-172, score-0.236]
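A quick numerical check of this disparity term is sketched below. The closed form dXS = Z2(z − Z1)/(Z1(z − Z2)) is a reconstruction inferred from the stated properties (singularity near z = Z2, monotonic decrease for z > Z2, and the infinity disparity Z2/Z1 mentioned later), so it should be verified against the original paper; the slit depths are example values of mine.

```python
import numpy as np

def d_xs(z, Z1, Z2):
    """Reconstructed Eqn. (8): XSlit disparity as a function of object depth z."""
    return Z2 * (z - Z1) / (Z1 * (z - Z2))

Z1, Z2 = 1.0, 1.3                       # example slit depths with Z2 > Z1
z = np.linspace(Z2 + 0.1, 50.0, 1000)
d = d_xs(z, Z1, Z2)
assert np.all(np.diff(d) < 0)           # monotonically decreasing for z > Z2
print(d[-1], Z2 / Z1)                   # tends to the infinity disparity Z2/Z1
```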
49 Finally, to enable correspondence matching, given a pixel (up, vp) in C and its disparity dXS w.r.t. [sent-173, score-0.247]
50 vp' = vp · dXS and then apply the epipolar curve constraint (Eqn. [sent-186, score-0.232]
51 ). In perspective cameras, the singularity of disparity occurs when scene points lie on the line connecting the two CoPs, i. [sent-191, score-0.349]
52 (8), we observe that an R-XSlit pair has singularity at v = 0 where disparity can no longer be computed. [sent-195, score-0.288]
53 In that case, we can redefine the disparity as dXS = u/u'. [sent-200, score-0.236]
54 Graph-Cut Stereo Matching To recover depth from our R-XSlit pair, we reuse the graph-cut algorithm [1, 2, 7] by modeling stereo matching as XSlit disparity labeling. [sent-208, score-0.625]
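The labeling energy that such a graph-cut formulation minimizes can be written down in a few lines. The sketch below only evaluates a data-plus-Potts-smoothness energy for a candidate XSlit disparity labeling; the actual minimization via alpha-expansion/graph cuts [1, 2, 7] is not implemented here, and data_cost is assumed to be the per-pixel, per-label matching cost described further below. The function name and parameters are mine.

```python
import numpy as np

def labeling_energy(data_cost, labels, lam=1.0):
    """Energy of a disparity labeling: summed per-pixel data cost plus a Potts
    penalty on label changes between 4-neighbors.
    data_cost: (H, W, n_labels) array; labels: (H, W) integer array."""
    h, w, _ = data_cost.shape
    ys, xs = np.mgrid[0:h, 0:w]
    e_data = data_cost[ys, xs, labels].sum()
    e_smooth = lam * ((labels[:, 1:] != labels[:, :-1]).sum() +
                      (labels[1:, :] != labels[:-1, :]).sum())
    return e_data + e_smooth
```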
55 Once we recover the disparity map, we can compute the object depth z by inverting Eqn. [sent-223, score-0.304]
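Inverting the disparity gives the depth directly. The closed form below follows from the reconstruction of Eqn. (8) used above and therefore carries the same caveat; the slit depths in the round-trip check are made-up values.

```python
def depth_from_d_xs(d, Z1, Z2):
    """Invert the reconstructed Eqn. (8): z = Z1*Z2*(d - 1) / (d*Z1 - Z2)."""
    return Z1 * Z2 * (d - 1.0) / (d * Z1 - Z2)

# Round trip: forward disparity (reconstructed Eqn. (8)), then the inversion above.
Z1, Z2, z_true = 1.0, 1.3, 2.0
d = Z2 * (z_true - Z1) / (Z1 * (z_true - Z2))
print(depth_from_d_xs(d, Z1, Z2))       # 2.0
```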
56 Different from perspective stereo, image patches in an XSlit image are distorted (sheared and stretched), where the distortion is determined by the slit position/direction and object depth [3, 24]. [sent-228, score-0.284]
57 (a) shows a perspective view of the scene and its depth map; (b) shows an R-XSlit stereo pair; (c) for robust patch matching, we first “unshear” the two images given a specific depth label and then resize them to compute similarity. [sent-232, score-0.564]
58 Specifically, when assigning a disparity label dXS to a pixel in camera C, we first shear the patches in each XSlit view with a shear matrix (1 s; 0 1), where s is the shear factor. [sent-234, score-0.381]
59 For acceleration, we further pre-scale the input image pairs with different disparity labels and then fetch patches from the corresponding ones given a specific disparity label. [sent-246, score-0.438]
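A hedged sketch of the per-label patch cost is given below. The shear and scale factors depend on the slit geometry and the depth label (see [3, 24]); here they are treated as precomputed inputs rather than derived, the function name is mine, and the 5 × 5 patch size matches the experiments later in the text.

```python
import numpy as np
from scipy.ndimage import affine_transform

def corrected_patch_cost(img1, img2, x, y, shear, scale, half=2):
    """SSD between a patch in img1 and the shear/scale-corrected patch in img2 at
    the same location; `shear` and `scale` are per-label correction factors
    assumed to be precomputed from the slit geometry and the depth label."""
    # affine_transform maps output to input coordinates (row, col); the offset is
    # chosen so the warp leaves the patch center fixed.
    A = np.array([[1.0 / scale, 0.0],
                  [shear,       1.0]])
    center = np.array([float(y), float(x)])
    warped = affine_transform(img2, A, offset=center - A @ center, order=1)
    p1 = img1[y - half:y + half + 1, x - half:x + half + 1]
    p2 = warped[y - half:y + half + 1, x - half:x + half + 1]
    return float(np.sum((p1 - p2) ** 2))
```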
60 5 shows a sample stereo matching result using our approach on an R-XSlit pair P(1. [sent-250, score-0.336]
61 (a) and (b) are the input XSlit images with the ground truth disparity map shown at the left-top corner of (a); (c)-(e) are the recovered disparity maps using pixel-based (c), patch-based with distortion correction (d), and patch-based without distortion correction (e) schemes. [sent-267, score-0.671]
62 Axis-Aligned R-XSlit Stereo A special R-XSlit stereo model arises when the two slits are orthogonal and axis-aligned. [sent-269, score-0.529]
63 As shown in the following sections, the R-POXSlit stereo pair has a number of advantages. [sent-273, score-0.312]
64 Finally, instead of rotating the two slits individually, we can rotate the camera by 90◦ to form an R-POXSlit pair. [sent-277, score-0.359]
65 The crossed-slit anamorphoser, credited to Ducos du Hauron, modifies the pinhole camera by replacing the pinhole with a pair of narrow, perpendicularly crossed slits, spaced apart along the camera axis [19]. [sent-281, score-0.269]
66 Today a commodity camera uses a spherical thin lens to emulate a pinhole camera by focusing rays passing through the lens onto a 3D point. [sent-286, score-0.475]
67 We observe that a cylindrical lens is a section of a cylinder that focuses rays passing through it onto a line parallel to the intersection of the surface of the lens and a plane tangent to it. [sent-288, score-0.367]
68 6 illustrates our prototype POXSlit camera where we mount the XSlit lens on a commodity interchangeable lens camera (e. [sent-293, score-0.393]
69 We align the two cylindrical lenses orthogonally using a lens tube. [sent-296, score-0.242]
70 Instead, we mount the camera onto an indexed rotation ring and capture the scene twice by rotating the camera by 90◦. [sent-299, score-0.254]
71 Depth Range and Error To evaluate the practicability of our R-POXSlit stereo, an important task is to measure the depth range and error in comparison with perspective stereo [5, 20]. [sent-302, score-0.437]
72 In our analysis, we assume that both stereo models have the same (1D) pixel size. [sent-303, score-0.301]
73 The object depth z and its disparity d are related by z = Zf (1 + b/d). [sent-306, score-0.304]
74 To study the maximum recoverable depth and depth error in R-POXSlit stereo, we conduct a pixel-shift analysis. [sent-312, score-0.238]
75 We first consider the disparity change ΔdXS by shifting one pixel along the epipolar curve. [sent-313, score-0.407]
76 Given a pixel (u, v) in C and its disparity dXS, we can calculate [sent-314, score-0.247]
77 (10) illustrates that the depth error in R-POXSlit stereo is similar to that in the perspective case in that it is linear in the pixel size. [sent-339, score-0.437]
78 However, in perspective stereo, its minimum disparity change is identical (i. [sent-341, score-0.298]
79 In perspective stereo, the epipolar geometry is a plane on which rays form a perspective uniform lattice. [sent-349, score-0.448]
80 In contrast, the epipolar geometry in R-POXSlit stereo is a hyperboloid where the depth variation under uniform v sampling is non-linear. [sent-350, score-0.563]
81 (8), we have the infinity disparity d∞XS = Z2/Z1 when z → ∞ (notice that this is different from the perspective case, where the disparity goes to 0 as z → ∞). [sent-358, score-0.323]
82 (12) also reveals that, as in perspective stereo, the larger the XSlit baseline, the larger the maximal resolvable depth zmax and the smaller the depth error Δz. [sent-375, score-0.318]
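The effect of baseline and disparity resolution on the maximum resolvable depth can be illustrated numerically. This sketch again relies on the reconstructed disparity model rather than the paper's exact Eqn. (12): zmax is taken as the depth at which the disparity comes within one minimum resolvable step of the infinity disparity Z2/Z1; the function name, step sizes, and slit depths are example values of mine.

```python
def z_max(Z1, Z2, d_min_step):
    """Depth at which the XSlit disparity (reconstructed Eqn. (8)) sits exactly one
    resolvable step above its limit Z2/Z1; beyond it, depth changes are unresolvable."""
    d = Z2 / Z1 + d_min_step
    return Z1 * Z2 * (d - 1.0) / (d * Z1 - Z2)

for Z1, Z2 in [(1.0, 1.3), (1.0, 2.0)]:           # wider slit separation = larger "baseline"
    for step in (1e-2, 1e-3):
        print(Z1, Z2, step, z_max(Z1, Z2, step))  # zmax grows with baseline and finer steps
```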
83 While perspective stereo needs to physically separate the cameras (figure caption: Perspective Stereo vs. R-POXSlit Stereo results on a synthetic scene) [sent-376, score-0.423]
84 apart for increasing the baseline, R-POXSlit stereo can fix the sensor location and instead separate the two slits farther apart. [sent-377, score-0.526]
85 Experiments We have validated our R-POXSlit stereo on both synthetic and real data. [sent-381, score-0.289]
86 We discretize the XSlit disparity into ten labels (the rendered images have a resolution of 600 × 318). [sent-390, score-0.24]
87 3) and then apply our patch-based stereo matching with patch size 5 × 5. [sent-398, score-0.297]
88 7 shows our R-POXSlit recovered disparity map which is comparable to the perspective stereo result. [sent-407, score-0.592]
89 This is analogous to conducting stereo matching on perspective image pairs that are slightly misaligned. [sent-425, score-0.392]
90 The misalignment can lead to inaccurate depth maps, although the recovered disparity map still reveals meaningful scene structures. [sent-426, score-0.346]
91 In this example, we discretize the disparity into 20 levels in the range of [1. [sent-427, score-0.24]
92 (b) and (d) show their corresponding disparity maps recovered by our algorithm. [sent-459, score-0.24]
93 The background now appears nearly focused and our stereo reconstruction algorithm produces a reasonable disparity map estimate, as shown in Fig. [sent-466, score-0.492]
94 Discussions and Future Work We have presented a new rotational stereo model based on the XSlit camera. [sent-469, score-0.336]
95 We have shown that the corresponding epipolar “curves” are hyperbolas and we have developed a robust patch-based stereo matching algorithm to handle image distortions. [sent-472, score-0.457]
96 A special R-XSlit pair arises when the two slits are orthogonal. [sent-473, score-0.295]
97 In particular, the shear correction used in our stereo matching algorithm can lead to large errors on slanted planar objects. [sent-480, score-0.392]
98 In the future, we plan to integrate recently proposed XSlit shape-from-distortion technique [21] with stereo matching to robustly handle such scenes. [sent-481, score-0.297]
99 Left: one of the XSlit images acquired with slits of width 1mm. [sent-484, score-0.235]
100 A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. [sent-574, score-0.273]
wordName wordTfidf (topN-words)
[('xslit', 0.778), ('stereo', 0.273), ('slits', 0.235), ('disparity', 0.219), ('epipolar', 0.16), ('lens', 0.09), ('dxs', 0.088), ('poxslit', 0.086), ('rays', 0.085), ('depth', 0.085), ('sin', 0.084), ('ray', 0.082), ('perspective', 0.079), ('cos', 0.078), ('slit', 0.074), ('cylindrical', 0.066), ('camera', 0.064), ('rotational', 0.063), ('dxis', 0.062), ('multiperspective', 0.062), ('lenses', 0.061), ('correction', 0.06), ('rotating', 0.06), ('cameras', 0.055), ('vp', 0.053), ('oblique', 0.052), ('pinhole', 0.051), ('bxs', 0.049), ('recoverable', 0.049), ('distortion', 0.046), ('geometry', 0.045), ('resolvable', 0.044), ('uv', 0.039), ('pair', 0.039), ('zf', 0.038), ('dixs', 0.037), ('rpoxslit', 0.037), ('parallax', 0.036), ('shear', 0.035), ('cop', 0.033), ('prototype', 0.031), ('commodity', 0.031), ('feldman', 0.03), ('singularity', 0.03), ('translational', 0.029), ('curves', 0.029), ('pixel', 0.028), ('anamorphic', 0.025), ('anamorphoser', 0.025), ('dxps', 0.025), ('orthogonally', 0.025), ('zmax', 0.025), ('zzii', 0.025), ('matching', 0.024), ('reuse', 0.024), ('baseline', 0.023), ('mount', 0.023), ('vv', 0.023), ('light', 0.022), ('ring', 0.022), ('defocus', 0.022), ('csoins', 0.022), ('scions', 0.022), ('rendered', 0.022), ('conference', 0.021), ('valid', 0.021), ('recovered', 0.021), ('scene', 0.021), ('special', 0.021), ('discretize', 0.021), ('resize', 0.021), ('horizontal', 0.02), ('appendix', 0.02), ('translated', 0.02), ('parallel', 0.02), ('intersect', 0.02), ('curve', 0.019), ('conduct', 0.019), ('parametrization', 0.019), ('seitz', 0.019), ('narrower', 0.019), ('sensor', 0.018), ('redefine', 0.017), ('pajdla', 0.017), ('stretched', 0.017), ('satisfy', 0.017), ('blur', 0.017), ('international', 0.016), ('ye', 0.016), ('intersection', 0.016), ('assemble', 0.016), ('blurs', 0.016), ('synthetic', 0.016), ('locate', 0.016), ('conducting', 0.016), ('distortions', 0.016), ('purely', 0.016), ('collects', 0.015), ('imaging', 0.015), ('intelligence', 0.015)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999988 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging
Author: Jinwei Ye, Yu Ji, Jingyi Yu
Abstract: Traditional stereo matching assumes perspective viewing cameras under a translational motion: the second camera is translated away from the first one to create parallax. In this paper, we investigate a different, rotational stereo model on a special multi-perspective camera, the XSlit camera [9, 24]. We show that rotational XSlit (R-XSlit) stereo can be effectively created by fixing the sensor and slit locations but switching the two slits’ directions. We first derive the epipolar geometry of R-XSlit in the 4D light field ray space. Our derivation leads to a simple but effective scheme for locating corresponding epipolar “curves”. To conduct stereo matching, we further derive a new disparity term in our model and develop a patch-based graph-cut solution. To validate our theory, we assemble an XSlit lens by using a pair of cylindrical lenses coupled with slit-shaped apertures. The XSlit lens can be mounted on commodity cameras where the slit directions are adjustable to form desirable R-XSlit pairs. We show through experiments that R-XSlit provides a potentially advantageous imaging system for conducting fixed-location, dynamic baseline stereo.
2 0.24717206 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera
Author: Jakob Engel, Jürgen Sturm, Daniel Cremers
Abstract: We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking which does not depend on visual features while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is oflargepractical valuefor robotics and augmented reality applications. – – 1. Towards Dense Monocular Visual Odometry Tracking a hand-held camera and recovering the threedimensional structure of the environment in real-time is among the most prominent challenges in computer vision. In the last years, dense approaches to these challenges have become increasingly popular: Instead of operating solely on visual feature positions, they reconstruct and track on the whole image using a surface-based map and thereby are fundamentally different from feature-based approaches. Yet, these methods are to date either not real-time capable on standard CPUs [11, 15, 17] or require direct depth measurements from the sensor [7], making them unsuitable for many practical applications. In this paper, we propose a novel semi-dense visual odometry approach for a monocular camera, which combines the accuracy and robustness of dense approaches with the efficiency of feature-based methods. Further, it computes highly accurate semi-dense depth maps from the monocular images, providing rich information about the 3D ∗ This work was supported by the ERC Starting Grant ConvexVision and the DFG project Mapping on Demand Figure1.Semi-Dens MoncularVisualOdometry:Oucfrloas rpe- proach works on a semi-dense inverse depth map and combines the accuracy and robustness of dense visual SLAM methods with the efficiency of feature-based techniques. Left: video frame, Right: color-coded semi-dense depth map, which consists of depth estimates in all image regions with sufficient structure. structure of the environment. We use the term visual odometry as supposed to SLAM, as for simplicity we deliberately maintain only information about the currently visible scene, instead of building a global world-model. – – 1.1. Related Work Feature-based monocular SLAM. In all feature-based methods (such as [4, 8]), tracking and mapping consists of two separate steps: First, discrete feature observations (i.e., their locations in the image) are extracted and matched to each other. Second, the camera and the full feature poses are calculated from a set of such observations disregarding the images themselves. While this preliminary abstrac– tion step greatly reduces the complexity of the overall problem and allows it to be tackled in real time, it inherently comes with two significant drawbacks: First, only image information conforming to the respective feature type and parametrization typically image corners and blobs [6] or line segments [9] is utilized. 
Second, features have to be matched to each other, which often requires the costly computation of scale- and rotation-invariant descriptors and robust outlier estimation methods like RANSAC. – – Dense monocular SLAM. To overcome these limitations and to better exploit the available image information, dense monocular SLAM methods [11, 17] have recently been proposed. The fundamental difference to keypoint-based approaches is that these methods directly work on the images 11444499 instead of a set of extracted features, for both mapping and tracking: The world is modeled as dense surface while in turn new frames are tracked using whole-image alignment. This concept removes the need for discrete features, and allows to exploit all information present in the image, increasing tracking accuracy and robustness. To date however, doing this in real-time is only possible using modern, powerful GPU processors. Similar methods are broadly used in combination with RGB-D cameras [7], which directly measure the depth of each pixel, or stereo camera rigs [3] greatly reducing the – complexity of the problem. Dense multi-view stereo. Significant prior work exists on multi-view dense reconstruction, both in a real-time setting [13, 11, 15], as well as off-line [5, 14]. In particular for offline reconstruction, there is a long history of using different baselines to steer the stereo-inherent trade-off between accuracy and precision [12]. Most similar to our approach is the early work of Matthies et al., who proposed probabilistic depth map fusion and propagation for image sequences [10], however only for structure from motion, i.e., not coupled with subsequent dense tracking. 1.2. Contributions In this paper, we propose a novel semi-dense approach to monocular visual odometry, which does not require feature points. The key concepts are • a probabilistic depth map representation, • tracking based on whole-image alignment, • the reduction on image-regions which carry informattihoen (esdeumctii-odenn osen), i manadg • the full incorporation of stereo measurement uncertainty. To the best of our knowledge, this is the first featureless, real-time monocular visual odometry approach, which runs in real-time on a CPU. 1.3. Method Outline Our approach is partially motivated by the basic principle that for most real-time applications, video information is abundant and cheap to come by. Therefore, the computational budget should be spent such that the expected information gain is maximized. Instead of reducing the images to a sparse set of feature observations however, our method continuously estimates a semi-dense inverse depth map for the current frame, i.e., a dense depth map covering all image regions with non-negligible gradient (see Fig. 2). It is comprised of one inverse depth hypothesis per pixel modeled by a Gaussian probability distribution. This representation still allows to use whole-image alignment [7] to track new orignalimagesemi-densedepthmap(ours)clfoasre keypointdepthmap[8]densedepthmap[1 ]RGB-Dcamera[16] Figure 2. Semi-Dense Approach: Our approach reconstructs and tracks on a semi-dense inverse depth map, which is dense in all image regions carrying information (top-right). For comparison, the bottom row shows the respective result from a keypoint-based approach, a fully dense approach and the ground truth from an RGB-D camera. frames, while at the same time greatly reducing computational complexity compared to volumetric methods. 
The estimated depth map is propagated from frame to frame, and updated with variable-baseline stereo comparisons. We explicitly use prior knowledge about a pixel’s depth to select a suitable reference frame on a per-pixel basis, and to limit the disparity search range. The remainder of this paper is organized as follows: Section 2 describes the semi-dense mapping part of the proposed method, including the derivation of the observation accuracy as well as the probabilistic data fusion, propagation and regularization steps. Section 3 describes how new frames are tracked using whole-image alignment, and Sec. 4 summarizes the complete visual odometry method. A qualitative as well as a quantitative evaluation is presented in Sec. 5. We then give a brief conclusion in Sec. 6. 2. Semi-Dense Depth Map Estimation One of the key ideas proposed in this paper is to estimate a semi-dense inverse depth map for the current camera image, which in turn can be used for estimating the camera pose of the next frame. This depth map is continuously propagated from frame to frame, and refined with new stereo depth measurements, which are obtained by performing per-pixel, adaptive-baseline stereo comparisons. This allows us to accurately estimate the depth both of close-by and far-away image regions. In contrast to previous work that accumulates the photometric cost over a sequence of several frames [11, 15], we keep exactly one inverse depth hypothesis per pixel that we represent as Gaussian probability distribution. This section is comprised of three main parts: Sec11445500 reference small baseline medium baseline large baseline tcso0120 .050.10.150.20.2sl5m areagdleiulm0.3 inverse depth d Figure 3. Variable Baseline Stereo: Reference image (left), three stereo images at different baselines (right), and the respective matching cost functions. While a small baseline (black) gives a unique, but imprecise minimum, a large baseline (red) allows for a very precise estimate, but has many false minima. tion 2. 1 describes the stereo method used to extract new depth measurements from previous frames, and how they are incorporated into the prior depth map. In Sec. 2.2, we describe how the depth map is propagated from frame to frame. In Sec. 2.3, we detail how we partially regularize the obtained depth map in each iteration, and how outliers are handled. Throughout this section, d denotes the inverse depth of a pixel. 2.1. Stereo-Based Depth Map Update It is well known [12] that for stereo, there is a trade-off between precision and accuracy (see Fig. 3). While many multiple-baseline stereo approaches resolve this by accumulating the respective cost functions over many frames [5, 13], we propose a probabilistic approach which explicitly takes advantage of the fact that in a video, smallbaseline frames are available before large-baseline frames. The full depth map update (performed once for each new frame) consists of the following steps: First, a subset of pixels is selected for which the accuracy of a disparity search is sufficiently large. For this we use three intuitive and very efficiently computable criteria, which will be derived in Sec. 2. 1.3. For each selected pixel, we then individually select a suitable reference frame, and perform a onedimensional disparity search. Propagated prior knowledge is used to reduce the disparity search range when possible, decreasing computational cost and eliminating false minima. The obtained inverse depth estimate is then fused into the depth map. 
2.1.1 Reference Frame Selection Ideally, the reference frame is chosen such that it maximizes the stereo accuracy, while keeping the disparity search range as well as the observation angle sufficiently cur ent framepixel’s “age” -4.8 s -3.9 s -3.1 s -2.2 s -1.2 s -0.8 s -0.5 s -0.4 s Figure 4. Adaptive Baseline Selection: For each pixel in the new frame (top left), a different stereo-reference frame is selected, based on how long the pixel was visible (top right: the more yellow, the older the pixel.). Some of the reference frames are displayed below, the red regions were used for stereo comparisons. small. As the stereo accuracy depends on many factors and because this selection is done for each pixel independently, we employ the following heuristic: We use the oldest frame the pixel was observed in, where the disparity search range and the observation angle do not exceed a certain threshold (see Fig. 4). If a disparity search is unsuccessful (i.e., no good match is found), the pixel’s “age” is increased, such that subsequent disparity searches use newer frames where the pixel is likely to be still visible. 2.1.2 Stereo Matching Method We perform an exhaustive search for the pixel’s intensity along the epipolar line in the selected reference frame, and then perform a sub-pixel accurate localization of the matching disparity. If a prior inverse depth hypothesis is available, the search interval is limited by d 2σd, where d and σd de,e nthoete s etharec mean avnadl ssta lnimdaiterdd d beyv dia ±tion 2σ σof the prior hypothesis. Otherwise, the full disparity range is searched. In our implementation, we use the SSD error over five equidistant points on the epipolar line: While this significantly increases robustness in high-frequent image regions, it does not change the purely one-dimensional nature of this search. Furthermore, it is computationally efficient, as 4 out ± of 5 interpolated image values can be re-used for each SSD evaluation. 2.1.3 Uncertainty Estimation In this section, we use uncertainty propagation to derive an expression for the error variance σd2 on the inverse depth d. 11445511 In general this can be done by expressing the optimal inverse depth d∗ as a function of the noisy inputs here we consider the images I0, I1 themselves, their relative orientation ξ and the camera calibration in terms of a projection function π1 – d∗ = d(I0, I1, ξ, π) . The error-variance of d∗ is then given by σd2 = JdΣJdT, (1) (2) where Jd is the Jacobian of d, and Σ the covariance of the input-error. For more details on covariance propagation, including the derivation of this formula, we refer to [2]. For simplicity, the following analysis is performed for patchfree stereo, i.e., we consider only a point-wise search for a single intensity value along the epipolar line. For this analysis, we split the computation into three steps: First, the epipolar line in the reference frame is computed. Second, the best matching position λ∗ ∈ R along it (i.e., the disparity) is determined. Third, the i∈nv eRrse al depth d∗ is computed from the disparity λ∗ . The first two steps involve two independent error sources: the geometric error, which originates from noise on ξ and π and affects the first step, and the photometric error, which originates from noise in the images I0, I1 and affects the second step. The third step scales these errors by a factor, which depends on the baseline. Geometric disparity error. The geometric error is the error ?λ on the disparity λ∗ caused by noise on ξ and π. 
While it would be possible to model, propagate, and estimate the complete covariance on ξ and π, we found that the gain in accuracy does not justify the increase in computational complexity. We therefore use an intuitive approximation: Let the considered epipolar line segment L ⊂ R2 be deLfineted th by L := ?l0 + λ?llyx? |λ ∈ S? , (3) where λ is the disparity with search interval S, (lx , ly)T the normalized epipolar line direction and l0 the point corresponding to infinite depth. We now assume that only the absolute position of this line segment, i.e., l0 is subject to isotropic Gaussian noise ?l . As in practice we keep the searched epipolar line segments short, the influence of rotational error is small, making this a good approximation. Intuitively, a positioning error ?l on the epipolar line causes a small disparity error ?λ if the epipolar line is parallel to the image gradient, and a large one otherwise (see Fig. 5). This can be mathematically derived as follows: The image constrains the optimal disparity λ∗ to lie on a certain isocurve, i.e. a curve of equal intensity. We approximate 1In the linear case, this is the camera matrix K – in practice however, nonlinear distortion and other (unmodeled) effects also play a role. FiguLre5.Geo?l mλetricDigs,palrityEroL?rl:Influe?nλceofgasmla posi- tioning error ?l of the epipolar line on the disparity error ?λ . The dashed line represents the isocurve on which the matching point has to lie. ?λ is small if the epipolar line is parallel to the image gradient (left), and a large otherwise (right). this isocurve to be locally linear, i.e. the gradient direction to be locally constant. This gives l0 + λ∗ ?llxy? =! + γ?−gxgy?, g0 γ ∈ R (4) where g := (gx , gy) ?is the image gradient and g0 a point on the isoline. The influence of noise on the image values will be derived in the next paragraph, hence at this point g and g0 are assumed noise-free. Solving for λ gives the optimal disparity λ∗ in terms of the noisy input l0: λ∗(l0) =?g,g?g0,−l? l0? (5) Analogously to (2), the variance of the geometric disparity error can then be expressed as σλ2(ξ,π)= Jλ∗(l0)?σ0l2 σ0l2?JλT∗(l0)=?gσ,l 2?2, (6) where g is the normalized image gradient, lthe normalized epipolar line direction and σl2 the variance of ?l. Note that this error term solely originates from noise on the relative camera orientation and the camera calibration π, i.e., it is independent of image intensity noise. ξ Photometric disparity error. Intuitively, this error encodes that small image intensity errors have a large effect on the estimated disparity if the image gradient is small, and a small effect otherwise (see Fig. 6). Mathematically, this relation can be derived as follows. We seek the disparity λ∗ that minimizes the difference in intensities, i.e., λ∗ = mλin (iref − Ip(λ))2, (7) where iref is the reference intensity, and Ip(λ) the image intensity on the epipolar line at disparity λ. We assume a good initialization λ0 to be available from the exhaustive search. Using a first-order Taylor approximation for Ip gives λ∗(I) = λ0 + (iref − Ip(λ0)) g−p1, (8) where gp is the gradient of Ip, that is image gradient along the epipolar line. For clarity we only consider noise on iref and Ip(λ0) ; equivalent results are obtained in the general case when taking into account noise on the image values involved in the computation of gp. The variance of the pho11445522 ?i Ip?λ ?iiIp?λλ Figure 6. 
Photometric Disparity Error: Noise ?i on the image intensity values causes a small disparity error ?λ if the image gradient along the epipolar line is large (left). If the gradient is small, the disparity error is magnified (right). tometric disparity error is given by σλ2(I) = Jλ∗(I)?σ0i2 σ0i2?Jλ∗(I) =2gσ2pi2, (9) where σi2 is the variance of the image intensity noise. The respective error originates solely from noisy image intensity values, and hence is independent of the geometric disparity error. Pixel to inverse depth conversion. Using that, for small camera rotation, the inverse depth d is approximately proportional to the disparity λ, the observation variance of the inverse depth σd2,obs can be calculated using σd2,obs = α2 ?σ2λ(ξ,π) + σλ2(I)? , (10) where the proportionality ?constant α in th?e general, nonrectified case – is different for each pixel, and can be calculated from – α :=δδdλ, (11) where δd is the length of the searched inverse depth interval, and δλ the length of the searched epipolar line segment. While α is inversely linear in the length of the camera translation, it also depends on the translation direction and the pixel’s location in the image. When using an SSD error over multiple points along the epipolar line – as our implementation does – a good upper bound for the matching uncertainty is then given by ?min{σ2λ(ξ,π)} + min{σλ2(I)}? σd2,obs-SSD ≤ α2 , (12) where the min goes over all points included in the? SSD error. 2.1.4 Depth Observation Fusion After a depth observation for a pixel in the current image has been obtained, we integrate it into the depth map as follows: If no prior hypothesis for a pixel exists, we initialize it directly with the observation. Otherwise, the new observation is incorporated into the prior, i.e., the two distribu- tions are multiplied (corresponding to the update step in a Knoailsmya onb fsieltrvera)t:io Gniv Nen(do a, pσrio2o)r, d thiest priobsutetiroionr N is( gdipv,eσnp2 b)y and a N?σ2pdσo2p++ σ σo2o2dp,σ2σpp2+σo2 σo2?. 2.1.5 (13) Summary of Uncertainty-Aware Stereo New stereo observations are obtained on a per-pixel basis, adaptively selecting for each pixel a suitable reference frame and performing a one-dimensional search along the epipolar line. We identified the three major factors which determine the accuracy of such a stereo observation, i.e., • the photometric disparity error σλ2(ξ,π), depending on tphheo magnitude sofp trhiet image gradient along the epipolar line, • the geometric disparity error σλ2(I) ,depending on the athnegl gee bometewtereinc dthisep image gradient and the epipolar line (independent of the gradient magnitude), and • the pixel to inverse depth ratio α, depending on the camera etlra tons ilantvioenrs, eth dee pfothcal r length ,a dndep tehned pixel’s position. These three simple-to-compute and purely local criteria are used to determine for which pixel a stereo update is worth the computational cost. Further, the computed observation variance is then used to integrate the new measurements into the existing depth map. 2.2. Depth Map Propagation We continuously propagate the estimated inverse depth map from frame to frame, once the camera position of the next frame has been estimated. Based on the inverse depth estimate d0 for a pixel, the corresponding 3D point is calculated and projected into the new frame, providing an inverse depth estimate d1 in the new frame. 
The hypothesis is then assigned to the closest integer pixel position to eliminate discretization errors, the sub-pixel accurate image location of the projected point is kept, and re-used for the next propagation step. For propagating the inverse depth variance, we assume the camera rotation to be small. The new inverse depth d1 can then be approximated by – d1(d0) = (d0−1 − tz)−1, (14) where tz is the camera translation along the optical axis. The variance of d1 is hence given by σd21= Jd1σd20JTd1+ σp2=?dd01?4σd20+ σp2, (15) where σp2 is the prediction uncertainty, which directly corresponds to the prediction step in an extended Kalman filter. It can also be interpreted as keeping the variance on 11445533 in the top right shows the new frame I2 (x) without depth information. Middle: Intermediate steps while minimizing E(ξ) on different pyramid levels. The top row shows the back-warped new frame I2 (w(x, d, ξ)), the bottom row shows the respective residual image I2 (w(x, di,ξ)) − I1 (x) . The bottom right image shows the final pixel-weights (black = small weight). Small weights mainly correspond to newly oc,cξl)ud)e −d or disoccluded pixel. tWhe z fo-cuonodrtd hina t uesi onfg a sm poailnlt v failxue ds, fo i.re. σ,p2 sedteticnrgea σsez2s0 d=rift σ,z2 a1s. it causes the estimated geometry to gradually ”lock” into place. Collision handling. At all times, we allow at most one inverse depth hypothesis per pixel: If two inverse depth hypothesis are propagated to the same pixel in the new frame, we distinguish between two cases: 1. if they are statistically similar, i.e., lie within 2σ bounds, they are treated as two independent observations of the pixel’s depth and fused according to (13). 2. otherwise, the point that is further away from the camera is assumed to be occluded, and is removed. 2.3. Depth Map Regularization For each frame – after all observations have been incorporated – we perform one regularization iteration by assign- ing each inverse depth value the average of the surrounding inverse depths, weighted by their respective inverse variance. To preserve sharp edges, if two adjacent inverse depth values are statistically different, i.e., are further away than 2σ, they do not contribute to one another. Note that the respective variances are not changed during regularization to account for the high correlation between neighboring hypotheses. Instead we use the minimal variance of all neighboring pixel when defining the stereo search range, and as a weighting factor for tracking (see Sec. 3). Outlier removal. To handle outliers, we continuously keep track of the validity of each inverse depth hypothesis in terms of the probability that it is an outlier, or has become invalid (e.g., due to occlusion or a moving object). For each successful stereo observation, this probability is decreased. It is increased for each failed stereo search, if the respective intensity changes significantly on propagation, or when the absolute image gradient falls below a given threshold. If, during regularization, the probability that all contributing neighbors are outliers i.e., the product of their individual outlier-probabilities rises above a given threshold, the hypothesis is removed. Equally, if for an “empty” pixel this product drops below a given threshold, a new hypothesis is created from the neighbors. 
This fills holes arising from the forward-warping nature of the propagation step, and dilates the semi-dense depth map to a small neighborhood around sharp image intensity edges, which signifi– – × cantly increases tracking and mapping robustness. 3. Dense Tracking Based on the inverse depth map of the previous frame, we estimate the camera pose of the current frame using dense image alignment. Such methods have previously been applied successfully (in real-time on a CPU) for tracking RGB-D cameras [7], which directly provide dense depth measurements along with the color image. It is based on the direct minimization of the photometric error ri (ξ) := (I2 (w(xi, di , ξ)) − I1 , (16) where the warp function w : Ω1 R R6 → Ω2 maps each point xi ∈ Ω1 in the reference× image RI1 →to Ωthe respective point w(x∈i, Ωdi, ξ) ∈ Ω2 in the new image I2. As input it requires the 3D,ξ pose Ωof the camera ξ ∈ R6 and uses the reestqiumiraetesd t hienv 3erDse p depth fd it ∈e cRa mfore rthae ξ pixel in I1. Note that no depth information with respect t toh Ie2 p i sx required. To increase robustness to self-occlusion and moving objects, we apply a weighting scheme as proposed in [7]. Further, we add the variance of the inverse depth σd2i as an additional weighting term, making the tracking resistant to recently initialized and still inaccurate depth estimates from 11445544 (xi))2 Figure 8. Examples: Top: Camera images overlaid with the respective stimated semi-dense inverse depth map. Bot om: 3D view of tracked scene. Note the versatility of our approach: It accurately reconstructs and tracks through (outside) scenes with a large depth- variance, including far-away objects like clouds , as well as (indoor) scenes with little structure and close to no image corners / keypoints. More examples are shown in the attached video. the mapping process. The final energy that is minimized is hence given by E(ξ) :=?iα(rσid2(iξ))ri(ξ), (17) where α : R → R defines the weight for a given residual. Minimizing t h→is error can b thee interpreted as computing uthale. maximum likelihood estimator for ξ, assuming independent noise on the image intensity values. The resulting weighted least-squares problem is solved efficiently using an iteratively reweighted Gauss-Newton algorithm coupled with a coarse-to-fine approach, using four pyramid levels. Figure 7 shows an example of the tracking process. For further details on the minimization we refer to [1]. 4. System Overview Tracking and depth estimation is split into two separate threads: One continuously propagates the inverse depth map to the most recent tracked frame, updates it with stereocomparisons and partially regularizes it. The other simultaneously tracks each incoming frame on the most recent available depth map. While tracking is performed in real- time at 30Hz, one complete mapping iteration takes longer and is hence done at roughly 15Hz if the map is heavily populated, we adaptively reduce the number of stereo comparisons to maintain a constant frame-rate. For stereo observations, a buffer of up to 100 past frames is kept, automatically removing those that are used least. We use a standard, keypoint-based method to obtain the relative camera pose between two initial frames, which are then used to initialize the inverse depth map needed for tracking successive frames. From this point onward, our method is entirely self-contained. 
In preliminary experiments, we found that in most cases our approach is even able to recover from random or extremely inaccurate initial depth maps, indicating that the keypoint-based initialization might become superfluous in the future. Table 1. Results on RGB-D Benchmark position drift (cm/s) rotation drift (deg/s) ours [7] [8] ours [7] [8] – fr2/xyz fr2/desk 0.6 2.1 0.6 2.0 8.2 - 0.33 0.65 0.34 0.70 3.27 - 5. Results We have tested our approach on both publicly available benchmark sequences, as well as live, using a hand-held camera. Some examples are shown in Fig. 8. Note that our method does not attempt to build a global map, i.e., once a point leaves the field of view of the camera or becomes occluded, the respective depth value is deleted. All experiments are performed on a standard consumer laptop with Intel i7 quad-core CPU. In a preprocessing step, we rectify all images such that a pinhole camera-model can be applied. 5.1. RGB-D Benchmark Sequences As basis for a quantitative evaluation and to facilitate reproducibility and easy comparison with other methods, we use the TUM RGB-D benchmark [16]. For tracking and mapping we only use the gray-scale images; for the very first frame however the provided depth image is used as initialization. Our method (like any monocular visual odometry method) fails in case of pure camera rotation, as the depth of new regions cannot be determined. The achieved tracking accuracy for two feasible sequences that is, sequences which do not contain strong camera rotation without simultaneous translation is given in Table 1. For comparison we also list the accuracy from (1) a state-of-the-art, dense RGB-D odometry [7], and (2) a state-of-the-art, keypointbased monocular SLAM system (PTAM, [8]). We initialize PTAM using the built-in stereo initializer, and perform a 7DoF (rigid body plus scale) alignment to the ground truth trajectory. Figure 9 shows the tracked camera trajectory for fr2/desk. We found that our method achieves similar accu– – 11445555 era the the the trajectory (black), the depth map of the first frame (blue), and estimated depth map (gray-scale) after a complete loop around table. Note how well certain details such as the keyboard and monitor align. racy as [7] which uses the same dense tracking algorithm but relies on the Kinect depth images. The keypoint-based approach [8] proves to be significantly less accurate and robust; it consistently failed after a few seconds for the second sequence. 5.2. Additional Test Sequences To analyze our approach in more detail, we recorded additional challenging sequences with the corresponding ground truth trajectory in a motion capture studio. Figure 10 shows an extract from the video, as well as the tracked and the ground-truth camera position over time. As can be seen from the figure, our approach is able to maintain a reasonably dense depth map at all times and the estimated camera trajectory matches closely the ground truth. 6. Conclusion In this paper we proposed a novel visual odometry method for a monocular camera, which does not require discrete features. In contrast to previous work on dense tracking and mapping, our approach is based on probabilistic depth map estimation and fusion over time. Depth measurements are obtained from patch-free stereo matching in different reference frames at a suitable baseline, which are selected on a per-pixel basis. To our knowledge, this is the first featureless monocular visual odometry method which runs in real-time on a CPU. 
In our experiments, we showed that the tracking performance of our approach is comparable to that of fully dense methods without requiring a depth sensor. References [1] S. Baker and I. Matthews. Lucas-Kanade 20 years on: A unifying framework. Technical report, Carnegie Mellon Univ., 2002. 7 [2] A. Clifford. Multivariate Error Analysis. John Wiley & Sons, 1973. 4 sionpito[m ]− 024 2 0 s1xzy0s20s30s40s50s60s Figure 10. Additional Sequence: Estimated camera trajectory and ground truth (dashed) for a long and challenging sequence. The complete sequence is shown in the attached video. [3] A. Comport, E. Malis, and P. Rives. Accurate quadri-focal tracking for robust 3d visual odometry. In ICRA, 2007. 2 [4] A. Davison, I. Reid, N. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 29, 2007. 1 [5] D. Gallup, J. Frahm, P. Mordohai, and M. Pollefeys. Variable baseline/resolution stereo. In CVPR, 2008. 2, 3 [6] C. Harris and M. Stephens. A combined corner and edge detector. In Alvey Vision Conference, 1988. 1 [7] C. Kerl, J. Sturm, and D. Cremers. Robust odometry estimation for RGB-D cameras. In ICRA, 2013. 1, 2, 6, 7, 8 [8] G. Klein and D. Murray. Parallel tracking and mapping for small AR workspaces. In Mixed and Augmented Reality (ISMAR), 2007. 1, 2, 7, 8 [9] G. Klein and D. Murray. Improving the agility of keyframebased SLAM. In ECCV, 2008. 1 [10] M. Pollefes et al. Detailed real-time urban 3d reconstruction from video. IJCV, 78(2-3): 143–167, 2008. 2, 3 [11] L. Matthies, R. Szeliski, and T. Kanade. Incremental estimation of dense depth maps from image image sequences. In CVPR, 1988. 2 [12] R. Newcombe, S. Lovegrove, and A. Davison. DTAM: Dense tracking and mapping in real-time. In ICCV, 2011. 1, 2 [13] M. Okutomi and T. Kanade. A multiple-baseline stereo. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 15(4):353–363, 1993. 2, 3 [14] T. Sato, M. Kanbara, N. Yokoya, and H. Takemura. Dense 3-d reconstruction of an outdoor scene by hundreds-baseline stereo using a hand-held camera. IJCV, 47: 1–3, 2002. 2 [15] J. Stuehmer, S. Gumhold, and D. Cremers. Real-time dense geometry from a handheld camera. In Pattern Recognition (DAGM), 2010. 1, 2 [16] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for the evaluation of RGB-D SLAM systems. In Intelligent Robot Systems (IROS), 2012. 2, 7 [17] A. Wendel, M. Maurer, G. Graber, T. Pock, and H. Bischof. Dense reconstruction on-the-fly. In ECCV, 2012. 1 11445566
3 0.20888451 304 iccv-2013-PM-Huber: PatchMatch with Huber Regularization for Stereo Matching
Author: Philipp Heise, Sebastian Klose, Brian Jensen, Alois Knoll
Abstract: Most stereo correspondence algorithms match support windows at integer-valued disparities and assume a constant disparity value within the support window. The recently proposed PatchMatch stereo algorithm [7] overcomes this limitation of previous algorithms by directly estimating planes. This work presents a method that integrates the PatchMatch stereo algorithm into a variational smoothing formulation using quadratic relaxation. The resulting algorithm allows the explicit regularization of the disparity and normal gradients using the estimated plane parameters. Evaluation of our method in the Middlebury benchmark shows that our method outperforms the traditional integer-valued disparity strategy as well as the original algorithm and its variants in sub-pixel accurate disparity estimation.
4 0.19849543 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies
Author: Karteek Alahari, Guillaume Seguin, Josef Sivic, Ivan Laptev
Abstract: We seek to obtain a pixel-wise segmentation and pose estimation of multiple people in a stereoscopic video. This involves challenges such as dealing with unconstrained stereoscopic video, non-stationary cameras, and complex indoor and outdoor dynamic scenes. The contributions of our work are two-fold: First, we develop a segmentation model incorporating person detection, pose estimation, as well as colour, motion, and disparity cues. Our new model explicitly represents depth ordering and occlusion. Second, we introduce a stereoscopic dataset with frames extracted from feature-length movies “StreetDance 3D ” and “Pina ”. The dataset contains 2727 realistic stereo pairs and includes annotation of human poses, person bounding boxes, and pixel-wise segmentations for hundreds of people. The dataset is composed of indoor and outdoor scenes depicting multiple people with frequent occlusions. We demonstrate results on our new challenging dataset, as well as on the H2view dataset from (Sheasby et al. ACCV 2012).
5 0.1502254 252 iccv-2013-Line Assisted Light Field Triangulation and Stereo Matching
Author: Zhan Yu, Xinqing Guo, Haibing Lin, Andrew Lumsdaine, Jingyi Yu
Abstract: Light fields are image-based representations that use densely sampled rays as a scene description. In this paper, we explore geometric structures of 3D lines in ray space for improving light field triangulation and stereo matching. The triangulation problem aims to fill in the ray space with continuous and non-overlapping simplices anchored at sampled points (rays). Such a triangulation provides a piecewise-linear interpolant useful for light field superresolution. We show that the light field space is largely bilinear due to 3D line segments in the scene, and direct triangulation of these bilinear subspaces leads to large errors. We instead present a simple but effective algorithm to first map bilinear subspaces to line constraints and then apply Constrained Delaunay Triangulation (CDT). Based on our analysis, we further develop a novel line-assisted graphcut (LAGC) algorithm that effectively encodes 3D line constraints into light field stereo matching. Experiments on synthetic and real data show that both our triangulation and LAGC algorithms outperform state-of-the-art solutions in accuracy and visual quality.
6 0.14260216 317 iccv-2013-Piecewise Rigid Scene Flow
7 0.11754595 108 iccv-2013-Depth from Combining Defocus and Correspondence Using Light-Field Cameras
8 0.097147696 255 iccv-2013-Local Signal Equalization for Correspondence Matching
9 0.088139877 88 iccv-2013-Constant Time Weighted Median Filtering for Stereo Matching and Beyond
10 0.073864967 209 iccv-2013-Image Guided Depth Upsampling Using Anisotropic Total Generalized Variation
11 0.072963826 387 iccv-2013-Shape Anchors for Data-Driven Multi-view Reconstruction
12 0.071833901 363 iccv-2013-Rolling Shutter Stereo
13 0.07147935 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction
14 0.070711166 284 iccv-2013-Multiview Photometric Stereo Using Planar Mesh Parameterization
15 0.070503421 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones
16 0.069786415 115 iccv-2013-Direct Optimization of Frame-to-Frame Rotation
17 0.069273666 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination
18 0.068576381 12 iccv-2013-A General Dense Image Matching Framework Combining Direct and Feature-Based Costs
19 0.066798598 348 iccv-2013-Refractive Structure-from-Motion on Underwater Images
20 0.0663969 18 iccv-2013-A Joint Intensity and Depth Co-sparse Analysis Model for Depth Map Super-resolution
topicId topicWeight
[(0, 0.105), (1, -0.165), (2, -0.043), (3, 0.048), (4, -0.015), (5, 0.006), (6, 0.007), (7, -0.13), (8, 0.005), (9, 0.017), (10, 0.02), (11, 0.004), (12, 0.006), (13, 0.004), (14, -0.02), (15, -0.074), (16, -0.083), (17, -0.037), (18, 0.031), (19, 0.115), (20, 0.011), (21, 0.047), (22, -0.187), (23, 0.048), (24, 0.108), (25, 0.011), (26, -0.046), (27, -0.133), (28, -0.011), (29, 0.133), (30, -0.028), (31, 0.04), (32, -0.081), (33, -0.14), (34, 0.135), (35, 0.015), (36, -0.116), (37, -0.049), (38, 0.01), (39, -0.02), (40, 0.001), (41, 0.046), (42, -0.025), (43, -0.041), (44, -0.008), (45, -0.025), (46, 0.014), (47, -0.02), (48, 0.038), (49, 0.015)]
simIndex simValue paperId paperTitle
same-paper 1 0.94884968 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging
Author: Jinwei Ye, Yu Ji, Jingyi Yu
Abstract: Traditional stereo matching assumes perspective viewing cameras under a translational motion: the second camera is translated away from the first one to create parallax. In this paper, we investigate a different, rotational stereo model on a special multi-perspective camera, the XSlit camera [9, 24]. We show that rotational XSlit (R-XSlit) stereo can be effectively created by fixing the sensor and slit locations but switching the two slits’ directions. We first derive the epipolar geometry of R-XSlit in the 4D light field ray space. Our derivation leads to a simple but effective scheme for locating corresponding epipolar “curves”. To conduct stereo matching, we further derive a new disparity term in our model and develop a patch-based graph-cut solution. To validate our theory, we assemble an XSlit lens by using a pair of cylindrical lenses coupled with slit-shaped apertures. The XSlit lens can be mounted on commodity cameras where the slit directions are adjustable to form desirable R-XSlit pairs. We show through experiments that R-XSlit provides a potentially advantageous imaging system for conducting fixed-location, dynamic baseline stereo.
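For readers unfamiliar with XSlit imaging, the defining constraint is that every collected ray must meet two fixed slits (lines in 3D). The sketch below constructs, for a generic point, the unique line through it that intersects two skew slits; it is a geometric illustration with made-up slit coordinates, not the paper's R-XSlit matching pipeline.

```python
import numpy as np

def xslit_ray(p, slit1, slit2):
    """Return (origin, direction) of the line through point p that intersects
    both slits. Each slit is given as (point_on_line, direction).

    Construction: the sought ray lies in the plane spanned by p and slit1 and
    in the plane spanned by p and slit2, so its direction is the cross product
    of the two plane normals. Assumes skew slits and a generic point p.
    """
    q1, d1 = slit1
    q2, d2 = slit2
    n1 = np.cross(d1, q1 - p)   # normal of the plane through p containing slit1
    n2 = np.cross(d2, q2 - p)   # normal of the plane through p containing slit2
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        raise ValueError("degenerate configuration: no unique XSlit ray")
    return p, direction / norm

# Two orthogonal slits at different depths (a typical XSlit configuration; values are illustrative).
slit_h = (np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))  # horizontal slit at z = 1
slit_v = (np.array([0.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0]))  # vertical slit at z = 2
origin, direction = xslit_ray(np.array([0.3, -0.2, 0.0]), slit_h, slit_v)
```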
2 0.89378846 304 iccv-2013-PM-Huber: PatchMatch with Huber Regularization for Stereo Matching
Author: Philipp Heise, Sebastian Klose, Brian Jensen, Alois Knoll
Abstract: Most stereo correspondence algorithms match support windows at integer-valued disparities and assume a constant disparity value within the support window. The recently proposed PatchMatch stereo algorithm [7] overcomes this limitation of previous algorithms by directly estimating planes. This work presents a method that integrates the PatchMatch stereo algorithm into a variational smoothing formulation using quadratic relaxation. The resulting algorithm allows the explicit regularization of the disparity and normal gradients using the estimated plane parameters. Evaluation of our method in the Middlebury benchmark shows that our method outperforms the traditional integer-valued disparity strategy as well as the original algorithm and its variants in sub-pixel accurate disparity estimation.
3 0.81685066 252 iccv-2013-Line Assisted Light Field Triangulation and Stereo Matching
Author: Zhan Yu, Xinqing Guo, Haibing Lin, Andrew Lumsdaine, Jingyi Yu
Abstract: Light fields are image-based representations that use densely sampled rays as a scene description. In this paper, we explore geometric structures of 3D lines in ray space for improving light field triangulation and stereo matching. The triangulation problem aims to fill in the ray space with continuous and non-overlapping simplices anchored at sampled points (rays). Such a triangulation provides a piecewise-linear interpolant useful for light field superresolution. We show that the light field space is largely bilinear due to 3D line segments in the scene, and direct triangulation of these bilinear subspaces leads to large errors. We instead present a simple but effective algorithm to first map bilinear subspaces to line constraints and then apply Constrained Delaunay Triangulation (CDT). Based on our analysis, we further develop a novel line-assisted graphcut (LAGC) algorithm that effectively encodes 3D line constraints into light field stereo matching. Experiments on synthetic and real data show that both our triangulation and LAGC algorithms outperform state-of-the-art solutions in accuracy and visual quality.
4 0.73163342 88 iccv-2013-Constant Time Weighted Median Filtering for Stereo Matching and Beyond
Author: Ziyang Ma, Kaiming He, Yichen Wei, Jian Sun, Enhua Wu
Abstract: Despite the continuous advances in local stereo matching for years, most efforts are on developing robust cost computation and aggregation methods. Little attention has been seriously paid to the disparity refinement. In this work, we study weighted median filtering for disparity refinement. We discover that with this refinement, even the simple box filter aggregation achieves comparable accuracy with various sophisticated aggregation methods (with the same refinement). This is due to the nice weighted median filtering properties of removing outlier error while respecting edges/structures. This reveals that the previously overlooked refinement can be at least as crucial as aggregation. We also develop the first constant time algorithmfor the previously time-consuming weighted median filter. This makes the simple combination “box aggregation + weighted median ” an attractive solution in practice for both speed and accuracy. As a byproduct, the fast weighted median filtering unleashes its potential in other applications that were hampered by high complexities. We show its superiority in various applications such as depth upsampling, clip-art JPEG artifact removal, and image stylization.
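A direct, non-constant-time weighted median of disparities makes the refinement step concrete. The sketch below is a naive reference implementation with a simple intensity-similarity weight (an assumption on our part), not the constant-time algorithm claimed in the abstract.

```python
import numpy as np

def weighted_median_refine(disparity, guide, radius=7, sigma_c=10.0):
    """Naive O(r^2)-per-pixel weighted median filter for disparity refinement.

    For each pixel, neighbouring disparities are sorted and the value at which
    the cumulative weight first exceeds half of the total weight is returned.
    `guide` is a grayscale guidance image used to weight the neighbours.
    """
    h, w = disparity.shape
    out = np.copy(disparity)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = disparity[y0:y1, x0:x1].ravel()
            g = guide[y0:y1, x0:x1].ravel().astype(np.float64)
            wgt = np.exp(-((g - float(guide[y, x])) ** 2) / (2 * sigma_c ** 2))
            order = np.argsort(d)
            csum = np.cumsum(wgt[order])
            idx = np.searchsorted(csum, 0.5 * csum[-1])
            out[y, x] = d[order][idx]
    return out
```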
5 0.69125998 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera
Author: Jakob Engel, Jürgen Sturm, Daniel Cremers
Abstract: We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking which does not depend on visual features while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is oflargepractical valuefor robotics and augmented reality applications. – – 1. Towards Dense Monocular Visual Odometry Tracking a hand-held camera and recovering the threedimensional structure of the environment in real-time is among the most prominent challenges in computer vision. In the last years, dense approaches to these challenges have become increasingly popular: Instead of operating solely on visual feature positions, they reconstruct and track on the whole image using a surface-based map and thereby are fundamentally different from feature-based approaches. Yet, these methods are to date either not real-time capable on standard CPUs [11, 15, 17] or require direct depth measurements from the sensor [7], making them unsuitable for many practical applications. In this paper, we propose a novel semi-dense visual odometry approach for a monocular camera, which combines the accuracy and robustness of dense approaches with the efficiency of feature-based methods. Further, it computes highly accurate semi-dense depth maps from the monocular images, providing rich information about the 3D ∗ This work was supported by the ERC Starting Grant ConvexVision and the DFG project Mapping on Demand Figure1.Semi-Dens MoncularVisualOdometry:Oucfrloas rpe- proach works on a semi-dense inverse depth map and combines the accuracy and robustness of dense visual SLAM methods with the efficiency of feature-based techniques. Left: video frame, Right: color-coded semi-dense depth map, which consists of depth estimates in all image regions with sufficient structure. structure of the environment. We use the term visual odometry as supposed to SLAM, as for simplicity we deliberately maintain only information about the currently visible scene, instead of building a global world-model. – – 1.1. Related Work Feature-based monocular SLAM. In all feature-based methods (such as [4, 8]), tracking and mapping consists of two separate steps: First, discrete feature observations (i.e., their locations in the image) are extracted and matched to each other. Second, the camera and the full feature poses are calculated from a set of such observations disregarding the images themselves. While this preliminary abstrac– tion step greatly reduces the complexity of the overall problem and allows it to be tackled in real time, it inherently comes with two significant drawbacks: First, only image information conforming to the respective feature type and parametrization typically image corners and blobs [6] or line segments [9] is utilized. 
Second, features have to be matched to each other, which often requires the costly computation of scale- and rotation-invariant descriptors and robust outlier estimation methods like RANSAC. – – Dense monocular SLAM. To overcome these limitations and to better exploit the available image information, dense monocular SLAM methods [11, 17] have recently been proposed. The fundamental difference to keypoint-based approaches is that these methods directly work on the images 11444499 instead of a set of extracted features, for both mapping and tracking: The world is modeled as dense surface while in turn new frames are tracked using whole-image alignment. This concept removes the need for discrete features, and allows to exploit all information present in the image, increasing tracking accuracy and robustness. To date however, doing this in real-time is only possible using modern, powerful GPU processors. Similar methods are broadly used in combination with RGB-D cameras [7], which directly measure the depth of each pixel, or stereo camera rigs [3] greatly reducing the – complexity of the problem. Dense multi-view stereo. Significant prior work exists on multi-view dense reconstruction, both in a real-time setting [13, 11, 15], as well as off-line [5, 14]. In particular for offline reconstruction, there is a long history of using different baselines to steer the stereo-inherent trade-off between accuracy and precision [12]. Most similar to our approach is the early work of Matthies et al., who proposed probabilistic depth map fusion and propagation for image sequences [10], however only for structure from motion, i.e., not coupled with subsequent dense tracking. 1.2. Contributions In this paper, we propose a novel semi-dense approach to monocular visual odometry, which does not require feature points. The key concepts are • a probabilistic depth map representation, • tracking based on whole-image alignment, • the reduction on image-regions which carry informattihoen (esdeumctii-odenn osen), i manadg • the full incorporation of stereo measurement uncertainty. To the best of our knowledge, this is the first featureless, real-time monocular visual odometry approach, which runs in real-time on a CPU. 1.3. Method Outline Our approach is partially motivated by the basic principle that for most real-time applications, video information is abundant and cheap to come by. Therefore, the computational budget should be spent such that the expected information gain is maximized. Instead of reducing the images to a sparse set of feature observations however, our method continuously estimates a semi-dense inverse depth map for the current frame, i.e., a dense depth map covering all image regions with non-negligible gradient (see Fig. 2). It is comprised of one inverse depth hypothesis per pixel modeled by a Gaussian probability distribution. This representation still allows to use whole-image alignment [7] to track new orignalimagesemi-densedepthmap(ours)clfoasre keypointdepthmap[8]densedepthmap[1 ]RGB-Dcamera[16] Figure 2. Semi-Dense Approach: Our approach reconstructs and tracks on a semi-dense inverse depth map, which is dense in all image regions carrying information (top-right). For comparison, the bottom row shows the respective result from a keypoint-based approach, a fully dense approach and the ground truth from an RGB-D camera. frames, while at the same time greatly reducing computational complexity compared to volumetric methods. 
The estimated depth map is propagated from frame to frame, and updated with variable-baseline stereo comparisons. We explicitly use prior knowledge about a pixel’s depth to select a suitable reference frame on a per-pixel basis, and to limit the disparity search range. The remainder of this paper is organized as follows: Section 2 describes the semi-dense mapping part of the proposed method, including the derivation of the observation accuracy as well as the probabilistic data fusion, propagation and regularization steps. Section 3 describes how new frames are tracked using whole-image alignment, and Sec. 4 summarizes the complete visual odometry method. A qualitative as well as a quantitative evaluation is presented in Sec. 5. We then give a brief conclusion in Sec. 6. 2. Semi-Dense Depth Map Estimation One of the key ideas proposed in this paper is to estimate a semi-dense inverse depth map for the current camera image, which in turn can be used for estimating the camera pose of the next frame. This depth map is continuously propagated from frame to frame, and refined with new stereo depth measurements, which are obtained by performing per-pixel, adaptive-baseline stereo comparisons. This allows us to accurately estimate the depth both of close-by and far-away image regions. In contrast to previous work that accumulates the photometric cost over a sequence of several frames [11, 15], we keep exactly one inverse depth hypothesis per pixel that we represent as Gaussian probability distribution. This section is comprised of three main parts: Sec11445500 reference small baseline medium baseline large baseline tcso0120 .050.10.150.20.2sl5m areagdleiulm0.3 inverse depth d Figure 3. Variable Baseline Stereo: Reference image (left), three stereo images at different baselines (right), and the respective matching cost functions. While a small baseline (black) gives a unique, but imprecise minimum, a large baseline (red) allows for a very precise estimate, but has many false minima. tion 2. 1 describes the stereo method used to extract new depth measurements from previous frames, and how they are incorporated into the prior depth map. In Sec. 2.2, we describe how the depth map is propagated from frame to frame. In Sec. 2.3, we detail how we partially regularize the obtained depth map in each iteration, and how outliers are handled. Throughout this section, d denotes the inverse depth of a pixel. 2.1. Stereo-Based Depth Map Update It is well known [12] that for stereo, there is a trade-off between precision and accuracy (see Fig. 3). While many multiple-baseline stereo approaches resolve this by accumulating the respective cost functions over many frames [5, 13], we propose a probabilistic approach which explicitly takes advantage of the fact that in a video, smallbaseline frames are available before large-baseline frames. The full depth map update (performed once for each new frame) consists of the following steps: First, a subset of pixels is selected for which the accuracy of a disparity search is sufficiently large. For this we use three intuitive and very efficiently computable criteria, which will be derived in Sec. 2. 1.3. For each selected pixel, we then individually select a suitable reference frame, and perform a onedimensional disparity search. Propagated prior knowledge is used to reduce the disparity search range when possible, decreasing computational cost and eliminating false minima. The obtained inverse depth estimate is then fused into the depth map. 
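The fusion mentioned at the end of this passage (Eq. (13) in Sec. 2.1.4 below) multiplies two Gaussian inverse-depth distributions. A minimal sketch of that update, with illustrative variable names:

```python
def fuse_inverse_depth(d_prior, var_prior, d_obs, var_obs):
    """Fuse a prior inverse-depth hypothesis N(d_prior, var_prior) with a new
    stereo observation N(d_obs, var_obs) by multiplying the two Gaussians.

    Returns the posterior mean and variance; the posterior variance is smaller
    than either input, so repeated observations sharpen the estimate.
    """
    var_post = (var_prior * var_obs) / (var_prior + var_obs)
    d_post = (var_obs * d_prior + var_prior * d_obs) / (var_prior + var_obs)
    return d_post, var_post

# Example: a coarse prior refined by a more certain observation.
d, v = fuse_inverse_depth(0.50, 0.04, 0.55, 0.01)
```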
2.1.1 Reference Frame Selection Ideally, the reference frame is chosen such that it maximizes the stereo accuracy, while keeping the disparity search range as well as the observation angle sufficiently cur ent framepixel’s “age” -4.8 s -3.9 s -3.1 s -2.2 s -1.2 s -0.8 s -0.5 s -0.4 s Figure 4. Adaptive Baseline Selection: For each pixel in the new frame (top left), a different stereo-reference frame is selected, based on how long the pixel was visible (top right: the more yellow, the older the pixel.). Some of the reference frames are displayed below, the red regions were used for stereo comparisons. small. As the stereo accuracy depends on many factors and because this selection is done for each pixel independently, we employ the following heuristic: We use the oldest frame the pixel was observed in, where the disparity search range and the observation angle do not exceed a certain threshold (see Fig. 4). If a disparity search is unsuccessful (i.e., no good match is found), the pixel’s “age” is increased, such that subsequent disparity searches use newer frames where the pixel is likely to be still visible. 2.1.2 Stereo Matching Method We perform an exhaustive search for the pixel’s intensity along the epipolar line in the selected reference frame, and then perform a sub-pixel accurate localization of the matching disparity. If a prior inverse depth hypothesis is available, the search interval is limited by d 2σd, where d and σd de,e nthoete s etharec mean avnadl ssta lnimdaiterdd d beyv dia ±tion 2σ σof the prior hypothesis. Otherwise, the full disparity range is searched. In our implementation, we use the SSD error over five equidistant points on the epipolar line: While this significantly increases robustness in high-frequent image regions, it does not change the purely one-dimensional nature of this search. Furthermore, it is computationally efficient, as 4 out ± of 5 interpolated image values can be re-used for each SSD evaluation. 2.1.3 Uncertainty Estimation In this section, we use uncertainty propagation to derive an expression for the error variance σd2 on the inverse depth d. 11445511 In general this can be done by expressing the optimal inverse depth d∗ as a function of the noisy inputs here we consider the images I0, I1 themselves, their relative orientation ξ and the camera calibration in terms of a projection function π1 – d∗ = d(I0, I1, ξ, π) . The error-variance of d∗ is then given by σd2 = JdΣJdT, (1) (2) where Jd is the Jacobian of d, and Σ the covariance of the input-error. For more details on covariance propagation, including the derivation of this formula, we refer to [2]. For simplicity, the following analysis is performed for patchfree stereo, i.e., we consider only a point-wise search for a single intensity value along the epipolar line. For this analysis, we split the computation into three steps: First, the epipolar line in the reference frame is computed. Second, the best matching position λ∗ ∈ R along it (i.e., the disparity) is determined. Third, the i∈nv eRrse al depth d∗ is computed from the disparity λ∗ . The first two steps involve two independent error sources: the geometric error, which originates from noise on ξ and π and affects the first step, and the photometric error, which originates from noise in the images I0, I1 and affects the second step. The third step scales these errors by a factor, which depends on the baseline. Geometric disparity error. The geometric error is the error ?λ on the disparity λ∗ caused by noise on ξ and π. 
While it would be possible to model, propagate, and estimate the complete covariance on ξ and π, we found that the gain in accuracy does not justify the increase in computational complexity. We therefore use an intuitive approximation: Let the considered epipolar line segment L ⊂ R2 be deLfineted th by L := ?l0 + λ?llyx? |λ ∈ S? , (3) where λ is the disparity with search interval S, (lx , ly)T the normalized epipolar line direction and l0 the point corresponding to infinite depth. We now assume that only the absolute position of this line segment, i.e., l0 is subject to isotropic Gaussian noise ?l . As in practice we keep the searched epipolar line segments short, the influence of rotational error is small, making this a good approximation. Intuitively, a positioning error ?l on the epipolar line causes a small disparity error ?λ if the epipolar line is parallel to the image gradient, and a large one otherwise (see Fig. 5). This can be mathematically derived as follows: The image constrains the optimal disparity λ∗ to lie on a certain isocurve, i.e. a curve of equal intensity. We approximate 1In the linear case, this is the camera matrix K – in practice however, nonlinear distortion and other (unmodeled) effects also play a role. FiguLre5.Geo?l mλetricDigs,palrityEroL?rl:Influe?nλceofgasmla posi- tioning error ?l of the epipolar line on the disparity error ?λ . The dashed line represents the isocurve on which the matching point has to lie. ?λ is small if the epipolar line is parallel to the image gradient (left), and a large otherwise (right). this isocurve to be locally linear, i.e. the gradient direction to be locally constant. This gives l0 + λ∗ ?llxy? =! + γ?−gxgy?, g0 γ ∈ R (4) where g := (gx , gy) ?is the image gradient and g0 a point on the isoline. The influence of noise on the image values will be derived in the next paragraph, hence at this point g and g0 are assumed noise-free. Solving for λ gives the optimal disparity λ∗ in terms of the noisy input l0: λ∗(l0) =?g,g?g0,−l? l0? (5) Analogously to (2), the variance of the geometric disparity error can then be expressed as σλ2(ξ,π)= Jλ∗(l0)?σ0l2 σ0l2?JλT∗(l0)=?gσ,l 2?2, (6) where g is the normalized image gradient, lthe normalized epipolar line direction and σl2 the variance of ?l. Note that this error term solely originates from noise on the relative camera orientation and the camera calibration π, i.e., it is independent of image intensity noise. ξ Photometric disparity error. Intuitively, this error encodes that small image intensity errors have a large effect on the estimated disparity if the image gradient is small, and a small effect otherwise (see Fig. 6). Mathematically, this relation can be derived as follows. We seek the disparity λ∗ that minimizes the difference in intensities, i.e., λ∗ = mλin (iref − Ip(λ))2, (7) where iref is the reference intensity, and Ip(λ) the image intensity on the epipolar line at disparity λ. We assume a good initialization λ0 to be available from the exhaustive search. Using a first-order Taylor approximation for Ip gives λ∗(I) = λ0 + (iref − Ip(λ0)) g−p1, (8) where gp is the gradient of Ip, that is image gradient along the epipolar line. For clarity we only consider noise on iref and Ip(λ0) ; equivalent results are obtained in the general case when taking into account noise on the image values involved in the computation of gp. The variance of the pho11445522 ?i Ip?λ ?iiIp?λλ Figure 6. 
Photometric Disparity Error: Noise ?i on the image intensity values causes a small disparity error ?λ if the image gradient along the epipolar line is large (left). If the gradient is small, the disparity error is magnified (right). tometric disparity error is given by σλ2(I) = Jλ∗(I)?σ0i2 σ0i2?Jλ∗(I) =2gσ2pi2, (9) where σi2 is the variance of the image intensity noise. The respective error originates solely from noisy image intensity values, and hence is independent of the geometric disparity error. Pixel to inverse depth conversion. Using that, for small camera rotation, the inverse depth d is approximately proportional to the disparity λ, the observation variance of the inverse depth σd2,obs can be calculated using σd2,obs = α2 ?σ2λ(ξ,π) + σλ2(I)? , (10) where the proportionality ?constant α in th?e general, nonrectified case – is different for each pixel, and can be calculated from – α :=δδdλ, (11) where δd is the length of the searched inverse depth interval, and δλ the length of the searched epipolar line segment. While α is inversely linear in the length of the camera translation, it also depends on the translation direction and the pixel’s location in the image. When using an SSD error over multiple points along the epipolar line – as our implementation does – a good upper bound for the matching uncertainty is then given by ?min{σ2λ(ξ,π)} + min{σλ2(I)}? σd2,obs-SSD ≤ α2 , (12) where the min goes over all points included in the? SSD error. 2.1.4 Depth Observation Fusion After a depth observation for a pixel in the current image has been obtained, we integrate it into the depth map as follows: If no prior hypothesis for a pixel exists, we initialize it directly with the observation. Otherwise, the new observation is incorporated into the prior, i.e., the two distribu- tions are multiplied (corresponding to the update step in a Knoailsmya onb fsieltrvera)t:io Gniv Nen(do a, pσrio2o)r, d thiest priobsutetiroionr N is( gdipv,eσnp2 b)y and a N?σ2pdσo2p++ σ σo2o2dp,σ2σpp2+σo2 σo2?. 2.1.5 (13) Summary of Uncertainty-Aware Stereo New stereo observations are obtained on a per-pixel basis, adaptively selecting for each pixel a suitable reference frame and performing a one-dimensional search along the epipolar line. We identified the three major factors which determine the accuracy of such a stereo observation, i.e., • the photometric disparity error σλ2(ξ,π), depending on tphheo magnitude sofp trhiet image gradient along the epipolar line, • the geometric disparity error σλ2(I) ,depending on the athnegl gee bometewtereinc dthisep image gradient and the epipolar line (independent of the gradient magnitude), and • the pixel to inverse depth ratio α, depending on the camera etlra tons ilantvioenrs, eth dee pfothcal r length ,a dndep tehned pixel’s position. These three simple-to-compute and purely local criteria are used to determine for which pixel a stereo update is worth the computational cost. Further, the computed observation variance is then used to integrate the new measurements into the existing depth map. 2.2. Depth Map Propagation We continuously propagate the estimated inverse depth map from frame to frame, once the camera position of the next frame has been estimated. Based on the inverse depth estimate d0 for a pixel, the corresponding 3D point is calculated and projected into the new frame, providing an inverse depth estimate d1 in the new frame. 
The hypothesis is then assigned to the closest integer pixel position to eliminate discretization errors, the sub-pixel accurate image location of the projected point is kept, and re-used for the next propagation step. For propagating the inverse depth variance, we assume the camera rotation to be small. The new inverse depth d1 can then be approximated by – d1(d0) = (d0−1 − tz)−1, (14) where tz is the camera translation along the optical axis. The variance of d1 is hence given by σd21= Jd1σd20JTd1+ σp2=?dd01?4σd20+ σp2, (15) where σp2 is the prediction uncertainty, which directly corresponds to the prediction step in an extended Kalman filter. It can also be interpreted as keeping the variance on 11445533 in the top right shows the new frame I2 (x) without depth information. Middle: Intermediate steps while minimizing E(ξ) on different pyramid levels. The top row shows the back-warped new frame I2 (w(x, d, ξ)), the bottom row shows the respective residual image I2 (w(x, di,ξ)) − I1 (x) . The bottom right image shows the final pixel-weights (black = small weight). Small weights mainly correspond to newly oc,cξl)ud)e −d or disoccluded pixel. tWhe z fo-cuonodrtd hina t uesi onfg a sm poailnlt v failxue ds, fo i.re. σ,p2 sedteticnrgea σsez2s0 d=rift σ,z2 a1s. it causes the estimated geometry to gradually ”lock” into place. Collision handling. At all times, we allow at most one inverse depth hypothesis per pixel: If two inverse depth hypothesis are propagated to the same pixel in the new frame, we distinguish between two cases: 1. if they are statistically similar, i.e., lie within 2σ bounds, they are treated as two independent observations of the pixel’s depth and fused according to (13). 2. otherwise, the point that is further away from the camera is assumed to be occluded, and is removed. 2.3. Depth Map Regularization For each frame – after all observations have been incorporated – we perform one regularization iteration by assign- ing each inverse depth value the average of the surrounding inverse depths, weighted by their respective inverse variance. To preserve sharp edges, if two adjacent inverse depth values are statistically different, i.e., are further away than 2σ, they do not contribute to one another. Note that the respective variances are not changed during regularization to account for the high correlation between neighboring hypotheses. Instead we use the minimal variance of all neighboring pixel when defining the stereo search range, and as a weighting factor for tracking (see Sec. 3). Outlier removal. To handle outliers, we continuously keep track of the validity of each inverse depth hypothesis in terms of the probability that it is an outlier, or has become invalid (e.g., due to occlusion or a moving object). For each successful stereo observation, this probability is decreased. It is increased for each failed stereo search, if the respective intensity changes significantly on propagation, or when the absolute image gradient falls below a given threshold. If, during regularization, the probability that all contributing neighbors are outliers i.e., the product of their individual outlier-probabilities rises above a given threshold, the hypothesis is removed. Equally, if for an “empty” pixel this product drops below a given threshold, a new hypothesis is created from the neighbors. 
This fills holes arising from the forward-warping nature of the propagation step, and dilates the semi-dense depth map to a small neighborhood around sharp image intensity edges, which signifi– – × cantly increases tracking and mapping robustness. 3. Dense Tracking Based on the inverse depth map of the previous frame, we estimate the camera pose of the current frame using dense image alignment. Such methods have previously been applied successfully (in real-time on a CPU) for tracking RGB-D cameras [7], which directly provide dense depth measurements along with the color image. It is based on the direct minimization of the photometric error ri (ξ) := (I2 (w(xi, di , ξ)) − I1 , (16) where the warp function w : Ω1 R R6 → Ω2 maps each point xi ∈ Ω1 in the reference× image RI1 →to Ωthe respective point w(x∈i, Ωdi, ξ) ∈ Ω2 in the new image I2. As input it requires the 3D,ξ pose Ωof the camera ξ ∈ R6 and uses the reestqiumiraetesd t hienv 3erDse p depth fd it ∈e cRa mfore rthae ξ pixel in I1. Note that no depth information with respect t toh Ie2 p i sx required. To increase robustness to self-occlusion and moving objects, we apply a weighting scheme as proposed in [7]. Further, we add the variance of the inverse depth σd2i as an additional weighting term, making the tracking resistant to recently initialized and still inaccurate depth estimates from 11445544 (xi))2 Figure 8. Examples: Top: Camera images overlaid with the respective stimated semi-dense inverse depth map. Bot om: 3D view of tracked scene. Note the versatility of our approach: It accurately reconstructs and tracks through (outside) scenes with a large depth- variance, including far-away objects like clouds , as well as (indoor) scenes with little structure and close to no image corners / keypoints. More examples are shown in the attached video. the mapping process. The final energy that is minimized is hence given by E(ξ) :=?iα(rσid2(iξ))ri(ξ), (17) where α : R → R defines the weight for a given residual. Minimizing t h→is error can b thee interpreted as computing uthale. maximum likelihood estimator for ξ, assuming independent noise on the image intensity values. The resulting weighted least-squares problem is solved efficiently using an iteratively reweighted Gauss-Newton algorithm coupled with a coarse-to-fine approach, using four pyramid levels. Figure 7 shows an example of the tracking process. For further details on the minimization we refer to [1]. 4. System Overview Tracking and depth estimation is split into two separate threads: One continuously propagates the inverse depth map to the most recent tracked frame, updates it with stereocomparisons and partially regularizes it. The other simultaneously tracks each incoming frame on the most recent available depth map. While tracking is performed in real- time at 30Hz, one complete mapping iteration takes longer and is hence done at roughly 15Hz if the map is heavily populated, we adaptively reduce the number of stereo comparisons to maintain a constant frame-rate. For stereo observations, a buffer of up to 100 past frames is kept, automatically removing those that are used least. We use a standard, keypoint-based method to obtain the relative camera pose between two initial frames, which are then used to initialize the inverse depth map needed for tracking successive frames. From this point onward, our method is entirely self-contained. 
In preliminary experiments, we found that in most cases our approach is even able to recover from random or extremely inaccurate initial depth maps, indicating that the keypoint-based initialization might become superfluous in the future. Table 1. Results on RGB-D Benchmark position drift (cm/s) rotation drift (deg/s) ours [7] [8] ours [7] [8] – fr2/xyz fr2/desk 0.6 2.1 0.6 2.0 8.2 - 0.33 0.65 0.34 0.70 3.27 - 5. Results We have tested our approach on both publicly available benchmark sequences, as well as live, using a hand-held camera. Some examples are shown in Fig. 8. Note that our method does not attempt to build a global map, i.e., once a point leaves the field of view of the camera or becomes occluded, the respective depth value is deleted. All experiments are performed on a standard consumer laptop with Intel i7 quad-core CPU. In a preprocessing step, we rectify all images such that a pinhole camera-model can be applied. 5.1. RGB-D Benchmark Sequences As basis for a quantitative evaluation and to facilitate reproducibility and easy comparison with other methods, we use the TUM RGB-D benchmark [16]. For tracking and mapping we only use the gray-scale images; for the very first frame however the provided depth image is used as initialization. Our method (like any monocular visual odometry method) fails in case of pure camera rotation, as the depth of new regions cannot be determined. The achieved tracking accuracy for two feasible sequences that is, sequences which do not contain strong camera rotation without simultaneous translation is given in Table 1. For comparison we also list the accuracy from (1) a state-of-the-art, dense RGB-D odometry [7], and (2) a state-of-the-art, keypointbased monocular SLAM system (PTAM, [8]). We initialize PTAM using the built-in stereo initializer, and perform a 7DoF (rigid body plus scale) alignment to the ground truth trajectory. Figure 9 shows the tracked camera trajectory for fr2/desk. We found that our method achieves similar accu– – 11445555 era the the the trajectory (black), the depth map of the first frame (blue), and estimated depth map (gray-scale) after a complete loop around table. Note how well certain details such as the keyboard and monitor align. racy as [7] which uses the same dense tracking algorithm but relies on the Kinect depth images. The keypoint-based approach [8] proves to be significantly less accurate and robust; it consistently failed after a few seconds for the second sequence. 5.2. Additional Test Sequences To analyze our approach in more detail, we recorded additional challenging sequences with the corresponding ground truth trajectory in a motion capture studio. Figure 10 shows an extract from the video, as well as the tracked and the ground-truth camera position over time. As can be seen from the figure, our approach is able to maintain a reasonably dense depth map at all times and the estimated camera trajectory matches closely the ground truth. 6. Conclusion In this paper we proposed a novel visual odometry method for a monocular camera, which does not require discrete features. In contrast to previous work on dense tracking and mapping, our approach is based on probabilistic depth map estimation and fusion over time. Depth measurements are obtained from patch-free stereo matching in different reference frames at a suitable baseline, which are selected on a per-pixel basis. To our knowledge, this is the first featureless monocular visual odometry method which runs in real-time on a CPU. 
In our experiments, we showed that the tracking performance of our approach is comparable to that of fully dense methods without requiring a depth sensor.
[Figure 10. Additional Sequence: Estimated camera trajectory and ground truth (dashed) for a long and challenging sequence. The complete sequence is shown in the attached video.]
References
[1] S. Baker and I. Matthews. Lucas-Kanade 20 years on: A unifying framework. Technical report, Carnegie Mellon Univ., 2002.
[2] A. Clifford. Multivariate Error Analysis. John Wiley & Sons, 1973.
[3] A. Comport, E. Malis, and P. Rives. Accurate quadri-focal tracking for robust 3D visual odometry. In ICRA, 2007.
[4] A. Davison, I. Reid, N. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 29, 2007.
[5] D. Gallup, J. Frahm, P. Mordohai, and M. Pollefeys. Variable baseline/resolution stereo. In CVPR, 2008.
[6] C. Harris and M. Stephens. A combined corner and edge detector. In Alvey Vision Conference, 1988.
[7] C. Kerl, J. Sturm, and D. Cremers. Robust odometry estimation for RGB-D cameras. In ICRA, 2013.
[8] G. Klein and D. Murray. Parallel tracking and mapping for small AR workspaces. In Mixed and Augmented Reality (ISMAR), 2007.
[9] G. Klein and D. Murray. Improving the agility of keyframe-based SLAM. In ECCV, 2008.
[10] M. Pollefeys et al. Detailed real-time urban 3D reconstruction from video. IJCV, 78(2-3):143–167, 2008.
[11] L. Matthies, R. Szeliski, and T. Kanade. Incremental estimation of dense depth maps from image sequences. In CVPR, 1988.
[12] R. Newcombe, S. Lovegrove, and A. Davison. DTAM: Dense tracking and mapping in real-time. In ICCV, 2011.
[13] M. Okutomi and T. Kanade. A multiple-baseline stereo. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 15(4):353–363, 1993.
[14] T. Sato, M. Kanbara, N. Yokoya, and H. Takemura. Dense 3-D reconstruction of an outdoor scene by hundreds-baseline stereo using a hand-held camera. IJCV, 47:1–3, 2002.
[15] J. Stuehmer, S. Gumhold, and D. Cremers. Real-time dense geometry from a handheld camera. In Pattern Recognition (DAGM), 2010.
[16] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for the evaluation of RGB-D SLAM systems. In Intelligent Robot Systems (IROS), 2012.
[17] A. Wendel, M. Maurer, G. Graber, T. Pock, and H. Bischof. Dense reconstruction on-the-fly. In ECCV, 2012.
6 0.60415035 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies
7 0.54099774 255 iccv-2013-Local Signal Equalization for Correspondence Matching
8 0.50282729 108 iccv-2013-Depth from Combining Defocus and Correspondence Using Light-Field Cameras
9 0.46973875 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes
10 0.44237301 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video
11 0.43838346 284 iccv-2013-Multiview Photometric Stereo Using Planar Mesh Parameterization
12 0.41594329 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones
13 0.4032034 209 iccv-2013-Image Guided Depth Upsampling Using Anisotropic Total Generalized Variation
14 0.39096457 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction
15 0.35924274 348 iccv-2013-Refractive Structure-from-Motion on Underwater Images
16 0.35018846 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination
17 0.3474012 387 iccv-2013-Shape Anchors for Data-Driven Multi-view Reconstruction
18 0.34606218 317 iccv-2013-Piecewise Rigid Scene Flow
19 0.3402009 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera
20 0.327328 18 iccv-2013-A Joint Intensity and Depth Co-sparse Analysis Model for Depth Map Super-resolution
topicId topicWeight
[(2, 0.095), (7, 0.022), (12, 0.015), (26, 0.079), (27, 0.014), (31, 0.042), (35, 0.016), (37, 0.251), (40, 0.012), (42, 0.094), (48, 0.012), (64, 0.024), (73, 0.019), (78, 0.012), (89, 0.147), (95, 0.01), (98, 0.026)]
simIndex simValue paperId paperTitle
1 0.78214002 296 iccv-2013-On the Mean Curvature Flow on Graphs with Applications in Image and Manifold Processing
Author: Abdallah El_Chakik, Abderrahim Elmoataz, Ahcene Sadi
Abstract: In this paper, we propose an adaptation and transcription of the mean curvature level set equation on a general discrete domain (weighted graphs with arbitrary topology). We introduce the perimeters on graph using difference operators and define the curvature as the first variation of these perimeters. Our proposed approach of mean curvature unifies both local and non local notions of mean curvature on Euclidean domains. Furthermore, it allows the extension to the processing of manifolds and data which can be represented by graphs.
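The difference operators mentioned in this abstract come from the weighted-graph PDE framework. The sketch below shows only a generic weighted gradient norm and a perimeter-style functional built from it (illustrative definitions only; the paper's exact operators and their first variation may differ).

```python
import numpy as np

def graph_gradient_norm(W, f):
    """||grad_w f||(i) = sqrt( sum_j w_ij * (f_j - f_i)^2 ) on a weighted graph.

    W is an (n, n) nonnegative affinity matrix, f an (n,) function on vertices.
    """
    diff = f[None, :] - f[:, None]          # diff[i, j] = f_j - f_i
    return np.sqrt((W * diff ** 2).sum(axis=1))

def graph_perimeter(W, indicator):
    """A perimeter-style functional: total variation of a set indicator,
    i.e. the sum of the weighted gradient norms over all vertices."""
    return graph_gradient_norm(W, indicator.astype(np.float64)).sum()
```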
same-paper 2 0.76384813 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging
Author: Jinwei Ye, Yu Ji, Jingyi Yu
Abstract: Traditional stereo matching assumes perspective viewing cameras under a translational motion: the second camera is translated away from the first one to create parallax. In this paper, we investigate a different, rotational stereo model on a special multi-perspective camera, the XSlit camera [9, 24]. We show that rotational XSlit (R-XSlit) stereo can be effectively created by fixing the sensor and slit locations but switching the two slits’ directions. We first derive the epipolar geometry of R-XSlit in the 4D light field ray space. Our derivation leads to a simple but effective scheme for locating corresponding epipolar “curves”. To conduct stereo matching, we further derive a new disparity term in our model and develop a patch-based graph-cut solution. To validate our theory, we assemble an XSlit lens by using a pair of cylindrical lenses coupled with slit-shaped apertures. The XSlit lens can be mounted on commodity cameras where the slit directions are adjustable to form desirable R-XSlit pairs. We show through experiments that R-XSlit provides a potentially advantageous imaging system for conducting fixed-location, dynamic baseline stereo.
3 0.76254714 110 iccv-2013-Detecting Curved Symmetric Parts Using a Deformable Disc Model
Author: Tom Sie Ho Lee, Sanja Fidler, Sven Dickinson
Abstract: Symmetry is a powerful shape regularity that’s been exploited by perceptual grouping researchers in both human and computer vision to recover part structure from an image without a priori knowledge of scene content. Drawing on the concept of a medial axis, defined as the locus of centers of maximal inscribed discs that sweep out a symmetric part, we model part recovery as the search for a sequence of deformable maximal inscribed disc hypotheses generated from a multiscale superpixel segmentation, a framework proposed by [13]. However, we learn affinities between adjacent superpixels in a space that’s invariant to bending and tapering along the symmetry axis, enabling us to capture a wider class of symmetric parts. Moreover, we introduce a global cost that perceptually integrates the hypothesis space by combining a pairwise and a higher-level smoothing term, which we minimize globally using dynamic programming. The new framework is demonstrated on two datasets, and is shown to significantly outperform the baseline [13].
4 0.66717374 228 iccv-2013-Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences
Author: Frank Steinbrücker, Christian Kerl, Daniel Cremers
Abstract: We propose a method to generate highly detailed, textured 3D models of large environments from RGB-D sequences. Our system runs in real-time on a standard desktop PC with a state-of-the-art graphics card. To reduce the memory consumption, we fuse the acquired depth maps and colors in a multi-scale octree representation of a signed distance function. To estimate the camera poses, we construct a pose graph and use dense image alignment to determine the relative pose between pairs of frames. We add edges between nodes when we detect loop-closures and optimize the pose graph to correct for long-term drift. Our implementation is highly parallelized on graphics hardware to achieve real-time performance. More specifically, we can reconstruct, store, and continuously update a colored 3D model of an entire corridor of nine rooms at high levels of detail in real-time on a single GPU with 2.5GB.
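The core fusion step behind signed-distance-function pipelines of this kind is a per-voxel weighted running average. The sketch below shows that generic update (with an assumed weight cap), not the paper's multi-scale octree implementation.

```python
def fuse_sdf_sample(d_old, w_old, d_new, w_new=1.0, w_max=100.0):
    """Weighted running average used to fuse a new truncated signed-distance
    observation d_new into a voxel that currently stores (d_old, w_old).

    Clamping the accumulated weight at w_max keeps the model adaptive to
    changes instead of becoming arbitrarily confident over time.
    """
    w = min(w_old + w_new, w_max)
    d = (w_old * d_old + w_new * d_new) / (w_old + w_new)
    return d, w

# Example: a voxel near the surface observed three times.
d, w = 0.0, 0.0
for obs in (0.02, 0.01, 0.015):
    d, w = fuse_sdf_sample(d, w, obs)
```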
5 0.64778519 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification
Author: Bo Wang, Zhuowen Tu, John K. Tsotsos
Abstract: In graph-based semi-supervised learning approaches, the classification rate is highly dependent on the size of the available labeled data, as well as the accuracy of the similarity measures. Here, we propose a semi-supervised multi-class/multi-label classification scheme, dynamic label propagation (DLP), which performs transductive learning through propagation in a dynamic process. Existing semi-supervised classification methods often have difficulty in dealing with multi-class/multi-label problems due to the lack of consideration of label correlation; our algorithm instead emphasizes dynamic metric fusion with label information. Significant improvement over the state-of-the-art methods is observed on benchmark datasets for both multi-class and multi-label tasks.
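For context, the classical graph-based label-propagation iteration that such methods build on looks as follows; this is the standard baseline with a symmetrically normalized affinity, not the dynamic multi-class/multi-label scheme proposed in the paper.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.99, n_iter=100):
    """Standard graph label propagation (baseline, not DLP itself).

    W : (n, n) symmetric affinity matrix.
    Y : (n, c) one-hot labels for labeled points, zeros for unlabeled points.
    Iterates F <- alpha * S @ F + (1 - alpha) * Y, with S the symmetrically
    normalized affinity, then predicts the argmax class per row.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    F = Y.astype(np.float64).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)
```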
6 0.64505053 197 iccv-2013-Hierarchical Joint Max-Margin Learning of Mid and Top Level Representations for Visual Recognition
7 0.64417386 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization
8 0.64410055 180 iccv-2013-From Where and How to What We See
9 0.64351177 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction
10 0.64300436 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria
11 0.64198285 6 iccv-2013-A Convex Optimization Framework for Active Learning
12 0.64189309 404 iccv-2013-Structured Forests for Fast Edge Detection
14 0.64052927 21 iccv-2013-A Method of Perceptual-Based Shape Decomposition
15 0.63989007 426 iccv-2013-Training Deformable Part Models with Decorrelated Features
16 0.63961732 20 iccv-2013-A Max-Margin Perspective on Sparse Representation-Based Classification
17 0.63959467 239 iccv-2013-Learning Hash Codes with Listwise Supervision
18 0.63957536 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition
19 0.63937712 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation
20 0.63935876 383 iccv-2013-Semi-supervised Learning for Large Scale Image Cosegmentation