iccv iccv2013 iccv2013-254 knowledge-graph by maker-knowledge-mining

254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones


Source: pdf

Author: Petri Tanskanen, Kalin Kolev, Lorenz Meier, Federico Camposeco, Olivier Saurer, Marc Pollefeys

Abstract: unknown-abstract

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We perform feature-based tracking and mapping in real time but leverage full inertial sensing in position and orientation to estimate the metric scale of the reconstructed 3D models and to make the process more resilient to sudden motions. [sent-2, score-0.862]

2 The approach leverages the inertial sensors to automatically select suitable keyframes when the phone is held still and uses the intermediate motion to calculate scale. [sent-4, score-0.942]

3 • We propose an efficient and accurate multi-resolution scheme for dense stereo matching which makes use of the capabilities of the GPU and allows the computational time for each processed image to be reduced to about 2-3 seconds. [sent-6, score-0.19]

4 Related Work: Our work is related to several fields in computer vision: visual-inertial fusion, simultaneous localization and mapping (SLAM), and image-based modeling. [sent-8, score-0.681]

5 Visual inertial fusion is a well established technique [1]. [sent-9, score-0.641]

6 Lobo and Dias align depth maps of a stereo head using gravity as vertical reference in [6]. [sent-10, score-0.377]

7 [23] developed a method to estimate the scaling factor between the inertial sensors (gyroscope and accelerometer) and a monocular SLAM approach, as well as the offsets between the IMU and the camera. [sent-13, score-0.731]

8 [12] demonstrated a stripped-down version of a camera pose tracking system on an Android phone where the inertial sensors are utilized only to obtain a gravity reference and frame-to-frame rotations. [sent-15, score-1.117]

9 Klein and Murray [3] proposed a system for real-time parallel tracking and mapping (PTAM) which was demonstrated to work well also on smartphones [4]. [sent-17, score-0.233]

10 Thereby, the maintained 3D map is built from sparse point correspondences only. [sent-18, score-0.126]

11 [7] perform tracking, mapping and dense reconstruction on a high-end GPU in real time on a commodity computer to create a dense model of a desktop setup. [sent-20, score-0.194]

12 As the proposed reconstruction pipeline is based on stereo to infer geometric structure, it is related to a myriad of works on binocular and multi-view stereo. [sent-22, score-0.199]

13 Thereby, a 3D representation of the scene is obtained by estimating depth maps from multiple views and converting them to triangle meshes based on the respective connectivity. [sent-31, score-0.216]

14 Here, the captured scene is represented by a point cloud where each generated 3D point is obtained as a probabilistic depth estimate by fusing measurements from different views. [sent-33, score-0.235]

15 Even though the aforementioned techniques cover our context, they are designed for high-end computers and are not functional on mobile devices due to some time-consuming optimization operations. [sent-34, score-0.136]

16 Recently, the first works on live 3D reconstruction on mobile devices appeared. [sent-35, score-0.194]

17 All demanding computations are performed on a separate server machine that provides visual feedback to a tablet computer. [sent-38, score-0.125]

18 [9] demonstrated an interactive system for 3D reconstruction capable of operating entirely on a mobile phone. [sent-40, score-0.183]

19 System Overview: Our system consists of three main blocks: inertial tracking, visual pose estimation and dense 3D modeling, as depicted in Fig. [sent-49, score-0.761]

20 We take two main input streams: camera frames with a resolution of 640×480 at tracking rates typically between 15 and 30 Hz, and inertial sensor information. [sent-52, score-0.773]

21 Figure 2: The inertial tracker provides camera poses which are subsequently refined by the visual tracking module. [sent-55, score-0.869]

22 The dense 3D modeling module is supplied with images and corresponding full calibration information at selected keyframes from the visual tracker as well as metric information about the captured scene from the inertial tracker. [sent-56, score-1.019]

23 The system is triggered automatically when the inertial estimator detects a salient motion with a minimal baseline. [sent-58, score-0.749]

24 Visual-Inertial Scale Estimation: Current smartphones are equipped with a 3D gyroscope and accelerometer, which exhibit (in contrast to larger inertial measurement units) substantial time-dependent and device-specific offsets, as well as significant noise. [sent-62, score-0.804]

25 To estimate scale, we first need to estimate the current world to body/camera frame rotation RB and the current earth-fixed velocity and position using the inertial sensors. [sent-63, score-0.937]

26 As the magnetometer and GPS are subject to large disturbances or even unavailable indoors as well as in many urban environments, we rely solely on the gyroscope and update the yaw angle with visual measurements mB. [sent-65, score-0.231]

27 We scale the gravity vector gB to the unit-length vector zB and estimate yB and xB using the additional heading information rzB=? [sent-66, score-0.196]
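
The rotation construction hinted at here (z-axis from the scaled gravity vector, x- and y-axes from the heading information) can be sketched as follows. This is a minimal Python/NumPy illustration under assumed conventions, not the authors' implementation; the function name and argument layout are hypothetical.

```python
import numpy as np

def rotation_from_gravity_and_heading(gravity_B, heading_B):
    """Build a body-frame basis from a measured gravity vector and an
    (approximate) heading direction, both expressed in the body frame.
    Hypothetical helper illustrating the idea, not the paper's exact formulas."""
    z = gravity_B / np.linalg.norm(gravity_B)      # z-axis from scaled gravity
    # remove the component of the heading that is parallel to gravity
    h = heading_B - np.dot(heading_B, z) * z
    x = h / np.linalg.norm(h)                      # x-axis from heading
    y = np.cross(z, x)                             # right-handed y-axis
    return np.stack([x, y, z], axis=0)             # rows are the basis vectors
```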

28 We initialize the required scale for visual-inertial fusion by first independently estimating motion segments. [sent-77, score-0.124]

29 Whenever the accelerometer reports significant motion, we create a new displacement hypothesis. This is immediately verified by checking for a start and a stop event in the motion. [sent-79, score-0.324]

30 These are determined given that for sufficiently exciting handheld motion, the acceleration signal will exhibit two peaks of opposite sign and significant magnitude. [sent-80, score-0.117]

31 A displacement is then estimated and compared to the displacement estimated by vision ( y? [sent-81, score-0.174]

32 Each new measurement pair is stored and the complete set is reevaluated using the latest scale by considering a pair as inlier if ? [sent-85, score-0.133]
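
The scale estimation loop described in the last two sentences (store each displacement pair, re-fit the scale, re-classify inliers) can be sketched as a small iterative least-squares routine. The inlier threshold, iteration count and pair representation below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def estimate_scale(pairs, inlier_tol=0.1, iterations=5):
    """Estimate the metric scale lambda from paired displacement magnitudes
    (d_inertial, d_visual) collected over detected motion segments, so that
    lambda * d_visual ~ d_inertial."""
    pairs = np.asarray(pairs, dtype=float)
    inliers = np.ones(len(pairs), dtype=bool)
    scale = 1.0
    for _ in range(iterations):
        if not inliers.any():
            break
        d_i, d_v = pairs[inliers, 0], pairs[inliers, 1]
        # closed-form least-squares solution of  lambda * d_v = d_i
        scale = float(np.dot(d_v, d_i) / np.dot(d_v, d_v))
        # re-evaluate the complete set of stored pairs with the latest scale
        residual = np.abs(scale * pairs[:, 1] - pairs[:, 0])
        inliers = residual < inlier_tol * np.maximum(pairs[:, 0], 1e-6)
    return scale, inliers
```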

33 As soon as the scale estimation converges, we can update the inertial position with visual measurements. [sent-97, score-0.724]

34 In addition to providing an estimate of the scene scale, we produce a filtered position estimate, as shown in Fig. 3. [sent-98, score-0.131]

35 This can be leveraged to process frames at lower rates or to mitigate intermediate visual tracking issues. [sent-99, score-0.127]

36 Since the sample rate of the accelerometer is higher than the frame rate of the camera, we predict the position of the phone with each new accelerometer sample and update with the visual information whenever a new measurement is available. [sent-102, score-0.822]

37 With every new IMU sample, the accelerometer data is rotated and the gravity is accounted for in the inertial frame. [sent-103, score-0.986]

38 This acceleration is integrated using Velocity Verlet, which is in turn used for a decaying velocity model. [sent-104, score-0.249]

39 Figure 3: position estimation (vision + IMU) and convergence of the scale estimation (real vs. estimated scale), both plotted over time in seconds. [sent-107, score-0.138]

40 Left: simple inertial prediction and decaying velocity vs ground truth. [sent-108, score-0.768]

41 Right: the visual-inertial estimate allows tracking losses to be partially rejected. [sent-109, score-0.128]

42 Here τ in Eq. (6) accounts for timing and sensor inaccuracies (inherent to the operating system available on mobile phones) by providing a decaying velocity model, preventing unwanted drift at small accelerations (see Fig. 3). [sent-115, score-0.298]
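
A minimal sketch of one inertial prediction step, combining Velocity Verlet integration with a decaying velocity term, is shown below; the decay time constant tau and the exact form of the decay are assumptions, since the excerpt only states that the decay suppresses drift at small accelerations.

```python
import numpy as np

def verlet_predict(p, v, a_prev, a_body, R_WB, dt, tau=0.5,
                   gravity=np.array([0.0, 0.0, -9.81])):
    """One Velocity-Verlet prediction step with a decaying velocity model.
    p, v     : position and velocity in the earth-fixed frame
    a_prev   : world-frame acceleration of the previous step
    a_body   : raw accelerometer sample in the body frame
    R_WB     : body-to-world rotation matrix
    tau      : decay time constant (assumed value, not from the paper)"""
    a_world = R_WB @ a_body + gravity              # rotate and remove gravity
    p_new = p + v * dt + 0.5 * a_prev * dt ** 2    # Verlet position update
    v_new = v + 0.5 * (a_prev + a_world) * dt      # Verlet velocity update
    v_new *= np.exp(-dt / tau)                     # decaying velocity model
    return p_new, v_new, a_world
```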

43 To adapt to the visual data, it is first scaled to metric units using λ and then fused with the inertial prediction using a simple linear combination based on the variances of both estimates. [sent-116, score-0.671]

44 In Eq. (7), the subscripts f, v and i denote the fused, vision and inertial position estimates, respectively, and κ is the normalizing factor. [sent-122, score-0.643]
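
One plausible reading of the variance-based linear combination in Eq. (7) is a simple inverse-variance average of the metric-scaled visual position and the inertial prediction; the sketch below assumes scalar variances per estimate and is not the paper's exact formula.

```python
import numpy as np

def fuse_positions(x_vision, var_vision, x_inertial, var_inertial, scale):
    """Variance-weighted fusion of the (metric-scaled) visual position and the
    inertial prediction; kappa normalizes the weights."""
    x_v = scale * np.asarray(x_vision, dtype=float)  # bring vision to metric units
    w_v, w_i = 1.0 / var_vision, 1.0 / var_inertial
    kappa = 1.0 / (w_v + w_i)                        # normalizing factor
    return kappa * (w_v * x_v + w_i * np.asarray(x_inertial, dtype=float))
```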

45 The visual updates become available with a time offset, so we need to re-propagate the predicted states from the point at which the vision measurement occurred to the current one [23]. [sent-123, score-0.153]

46 This is done by storing the states in a buffer and, whenever a visual measurement arrives, looking back for the closest time-stamp in that buffer, updating it and propagating forward to the current time. [sent-124, score-0.253]
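
The delayed-update handling can be realized with a time-stamped state buffer: when a visual pose arrives, the stored state closest to its time-stamp is corrected and all newer states are re-integrated. The buffer layout, the correct() and propagate() callbacks and the buffer length below are illustrative assumptions.

```python
import bisect

class StateBuffer:
    """Keeps recent (timestamp, state) pairs so that delayed visual updates can
    be applied at the time they were observed and re-propagated forward."""
    def __init__(self, max_len=200):
        self.times, self.states = [], []
        self.max_len = max_len

    def push(self, t, state):
        self.times.append(t)
        self.states.append(state)
        if len(self.times) > self.max_len:       # drop the oldest entry
            self.times.pop(0)
            self.states.pop(0)

    def apply_delayed_update(self, t_meas, correct, propagate):
        # find the stored state whose time-stamp is closest to the measurement
        idx = bisect.bisect_left(self.times, t_meas)
        idx = max(0, min(idx, len(self.times) - 1))
        self.states[idx] = correct(self.states[idx])
        # re-propagate from there to the most recent state
        for k in range(idx + 1, len(self.times)):
            dt = self.times[k] - self.times[k - 1]
            self.states[k] = propagate(self.states[k - 1], dt)
        return self.states[-1]
```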

47 Fig. 4 shows the results of the combined vision and inertial fusion during freehand 3D motion while tracking a tabletop scene. [sent-125, score-0.765]

48 It is evident that scale and absolute position are correctly estimated throughout the trajectory. [sent-126, score-0.131]

49 To evaluate the estimated scale accuracy, metric reconstruction of a textured cylinder with a known diameter was performed. [sent-127, score-0.141]

50 This is mostly due to the inaccuracy in the magnitude of the measurements of the consumer-grade accelerometer. [sent-129, score-0.323]

51 It should be noted that the accuracy of those measurements could be improved by calibrating the accelerometer with respect to the camera beforehand, but such investigations are left for future work. [sent-141, score-0.377]

52 The rest of the initialization follows the design of [3]: In order to get a denser initial map, FAST corners are then extracted on four resolution levels and for every corner an 8×8 pixel patch at the respective level is stored as a descriptor. [sent-149, score-0.232]

53 The matching is done by comparing the zero-mean sum of squared differences (ZSSD) value between the pixel patches of the respective FAST corners along the epipolar line. [sent-150, score-0.221]

54 To speed up the process, only the segment of the epipolar line that matches the scene depth estimated from the already triangulated points is searched. [sent-151, score-0.366]
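
The ZSSD comparison of 8×8 patches along the restricted epipolar segment reduces to a zero-mean sum of squared differences followed by a winner-takes-all choice; a compact NumPy sketch (patch extraction and epipolar sampling are assumed to happen elsewhere):

```python
import numpy as np

def zssd(patch_a, patch_b):
    """Zero-mean sum of squared differences between two equally sized patches."""
    a = patch_a.astype(np.float32) - patch_a.mean()
    b = patch_b.astype(np.float32) - patch_b.mean()
    return float(np.sum((a - b) ** 2))

def best_match(ref_patch, candidate_patches):
    """Index and score of the candidate patch (e.g. FAST corners sampled on the
    restricted epipolar segment) with the lowest ZSSD score."""
    scores = [zssd(ref_patch, c) for c in candidate_patches]
    return int(np.argmin(scores)), min(scores)
```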

55 After the best match is found, the points are triangulated and added to the map, which is subsequently refined with bundle adjustment. [sent-152, score-0.214]

56 Since the gravity vector is known from the inertial estimator, the map is also rotated such that it matches the earth inertial frame. [sent-153, score-1.409]

57 Patch Tracking and Pose Refinement: The tracker is used to refine the pose estimate from the inertial pose estimator and to correct drift. [sent-156, score-0.849]

58 For every new camera frame, FAST corners are extracted and matched against the map points projected onto the current view using the inertial pose. [sent-157, score-0.873]

59 The matching is done by warping the 8×8 pixel patch of the map point onto the view of the current frame and computing the ZSSD score. [sent-158, score-0.243]

60 For computing the warp, the appropriate pyramid level in the current view is selected. [sent-160, score-0.14]

61 The matches are then optimized with a robust Levenberg-Marquardt absolute pose estimator, giving the new vision-based pose for the current frame. [sent-162, score-0.261]

62 If for some reason the tracking is lost, the small blurry image relocalization module from [3] is used. [sent-163, score-0.205]

63 Sparse Mapping: New keyframes are added to the map if the user has moved the camera a certain amount or if the inertial position estimator detects that the phone is held still after salient motion. [sent-166, score-1.097]
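
A hedged sketch of such a keyframe decision, combining the visual motion criterion with the inertial stillness detection, could look as follows; all threshold values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def should_add_keyframe(cur_pos, last_kf_pos, accel_std, had_salient_motion,
                        moved_thresh=0.10, still_thresh=0.05):
    """Add a keyframe if the camera moved far enough since the last keyframe,
    or if the inertial estimator reports the phone is held still after a
    salient motion. Thresholds (meters, m/s^2) are assumed values."""
    moved_enough = np.linalg.norm(
        np.asarray(cur_pos) - np.asarray(last_kf_pos)) > moved_thresh
    held_still = accel_std < still_thresh and had_salient_motion
    return moved_enough or held_still
```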

64 In either case, the keyframe is provided to the mapping thread that accepts the observations of the map points from the tracker and searches for new ones. [sent-167, score-0.266]

65 To minimize the possibility that new points are created at positions where points already exist, a mask is created to indicate the already covered regions. [sent-169, score-0.193]

66 Since the typical scene consists of an object in the middle of the scene, only map points that were observed from an angle of 60 degrees or less relative to the current frame are added to this mask. [sent-171, score-0.176]

67 Similar to [3], the mapper performs bundle adjustment optimization in the background. [sent-173, score-0.184]

68 After a keyframe is added, a local bundle adjustment step with the closest 4 keyframes is performed. [sent-175, score-0.338]

69 With a reduced priority, the mapper optimizes the keyframes that are prepared for the dense modeling module. [sent-176, score-0.285]

70 With lowest priority, the mapping thread starts global bundle adjustment optimization based on all frames and map points. [sent-178, score-0.276]

71 Dense 3D Modeling: At the core of the 3D modeling module is a stereo-based reconstruction pipeline. [sent-181, score-0.174]

72 In particular, it is composed of image mask estimation, depth map computation and depth map filtering. [sent-182, score-0.549]

73 Finally, the filtered depth map is back-projected to 3D, colored with respect to the reference image and merged with the current point cloud. [sent-184, score-0.368]

74 Image Mask Estimation: The task of the maintained image mask is twofold. [sent-187, score-0.168]

75 A texture-based mask is computed by reverting to the Shi-Tomasi measure used also at the visual tracking stage (see Section 5). [sent-191, score-0.238]

76 Additionally, another mask is estimated based on the coverage of the current point cloud. [sent-195, score-0.264]

77 The final image mask is obtained by fusing the estimated texture and coverage mask. [sent-199, score-0.2]
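
The mask fusion can be sketched as a per-pixel conjunction of a texture test (Shi-Tomasi measure above a threshold) and the complement of the point-cloud coverage; the per-pixel structure tensor below is a lightweight stand-in for the measure used by the tracker, and the threshold is an assumed value.

```python
import numpy as np

def shi_tomasi_response(img):
    """Smaller eigenvalue of the per-pixel 2x2 structure tensor (Shi-Tomasi
    measure), computed with simple finite differences; a simplified stand-in
    for the windowed measure used by the visual tracker."""
    gy, gx = np.gradient(img.astype(np.float32))
    a, b, c = gx * gx, gx * gy, gy * gy            # structure tensor entries
    trace, det = a + c, a * c - b * b
    disc = np.sqrt(np.maximum(trace ** 2 / 4.0 - det, 0.0))
    return trace / 2.0 - disc                      # minimum eigenvalue

def image_mask(img, coverage_mask, texture_thresh=1e-3):
    """Final mask: well-textured pixels that are not yet covered by the current
    point cloud; subsequent depth computations are restricted to these pixels."""
    texture_mask = shi_tomasi_response(img) > texture_thresh
    return texture_mask & ~coverage_mask
```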

78 Subsequent depth map computations are restricted to pixels within the mask. [sent-200, score-0.306]

79 We run binocular stereo by taking an incoming image as a reference view and matching it with an appropriate recent image in the provided series of keyframes. [sent-204, score-0.258]

80 Instead of applying a classical technique based on estimating the optimal similarity score along respective epipolar lines, we adopt a multi-resolution scheme. [sent-205, score-0.126]

81 The proposed approach involves downsampling the input images, estimating depths, and subsequently upgrading and refining the results by restricting computations to a suitable pixel-dependent range. [sent-206, score-0.126]

82 Similar to the visual tracking stage (see Section 5), we rely on computations at multiple pyramid resolutions. [sent-207, score-0.252]

83 Starting at the top of the pyramid, the multi-resolution approach estimates a depth map Di : Ωi ⊂ ℤ² → R ⊂ ℝ for each level i, based on the image data at that level and the depth map Di+1 from the next (coarser) level. [sent-213, score-0.438]

84 In particular, we apply an update scheme based on the current downsampled pixel position and three appropriate neighbors. [sent-215, score-0.192]

85 For example, for pixel (x, y) ∈ Ωi with x mod 2 = 1 and y mod 2 = 1 (the remaining cases are handled analogously), we consider the following already processed neighbors. [sent-216, score-0.174]

86 Figure 5, from left to right: the reference image of a stereo pair, the corresponding depth map estimated with a classical single-resolution winner-takes-all strategy, and the result obtained with the proposed multi-resolution scheme. [sent-219, score-0.216]

87 We estimate the depth Di(x, y) by searching an appropriate range given by the minimum and maximum value in {D^l_{i+1} | l = 0, …}. [sent-233, score-0.23]

88 Thereby, depth values that are not available due to boundary constraints or the maintained image mask are omitted. [sent-237, score-0.207]

89 As the uncertainty is expected to increase with increasing depth due to the larger jumps of the values from pixel to pixel, we use a tolerance parameter which is inversely proportional to the local depth D^0_{i+1}. [sent-239, score-0.376]
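
Putting the last few sentences together, the per-pixel search interval at level i can be derived from the coarser level i+1 by taking the minimum and maximum over the considered neighbors and widening it by the depth-dependent tolerance; the neighbor pattern and tolerance constant in this sketch are simplifications, not the paper's exact scheme.

```python
import numpy as np

def search_range(D_coarse, mask_coarse, x, y, k_tol=0.05, eps=1e-6):
    """Depth search interval for pixel (x, y) at level i, derived from the
    coarser depth map D_coarse (level i+1). Neighbors outside the image or
    outside the mask are omitted. k_tol is an assumed constant."""
    xc, yc = x // 2, y // 2
    h, w = D_coarse.shape
    values = []
    for dy in (0, 1):
        for dx in (0, 1):                  # coarse pixel plus three neighbors
            u, v = xc + dx, yc + dy
            if 0 <= u < w and 0 <= v < h and mask_coarse[v, u]:
                values.append(float(D_coarse[v, u]))
    if not values:
        return None                        # caller falls back to the full range R
    if yc < h and xc < w and mask_coarse[yc, xc]:
        d0 = float(D_coarse[yc, xc])
    else:
        d0 = float(np.mean(values))
    tol = k_tol / (d0 + eps)               # inversely proportional to local depth
    return min(values) - tol, max(values) + tol
```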

90 It should be noted that all estimated depth maps rely on a predefined range R ⊂ ℝ which can be determined by analyzing the distribution of the sparse map constructed in the camera tracking module (see Section 5). [sent-243, score-0.518]

91 Second, when applying a winner-takes-all strategy, potential mismatches can be avoided due to the more robust depth estimates at low image resolution. [sent-246, score-0.191]

92 Note that for the above example a conservative depth range was used so as to capture the entire field of view. [sent-251, score-0.15]

93 Despite the utilization of a multiresolution scheme, the developed method for dense stereo is not efficient enough to meet the requirements of the application at hand. [sent-255, score-0.151]

94 For this reason, we made use of the parallelization potential of the algorithm with a GPU implementation (based on GLSL ES), which reduces the overall runtime of the 3D modeling module to about 2-3 seconds per processed image. [sent-256, score-0.193]

95 More concretely, we estimate depth maps at different pyramid levels in separate rendering passes. [sent-257, score-0.227]

96 Thereby, some care should be taken due to the precision limitations of current mobile GPUs. [sent-258, score-0.147]

97 We address this difficulty by using the sum of absolute differences (SAD) as a similarity measure in the matching process (over 5×5 image patches) and transferring the triangulation operations that yield the final depth estimates to the CPU. [sent-259, score-0.15]

98 A crucial step in binocular stereo is the choice of an appropriate image pair. [sent-261, score-0.182]

99 Instead, we propose to maintain a sliding window containing the last Nv provided keyframes (Nv = 5 in our implementation) and pick the one maximizing a suitable criterion for matching with the current view. [sent-265, score-0.272]

100 Additionally, we impose the following constraints: 5° ≤ θ^pose_jk ≤ 45°, 0° ≤ θ^view_jk ≤ 45°, 0° ≤ θ^up_jk ≤ 30°. An input image is discarded and not processed if none of the images in the current sliding window satisfies these constraints with respect to it. [sent-267, score-0.177]
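
Keyframe selection for stereo matching can then be sketched as scanning the sliding window, discarding candidates that violate the angular constraints and keeping the one with the best score; the definitions of θ^pose, θ^view and θ^up, the keyframe attributes and the score function below are plausible stand-ins, since the excerpt does not define them.

```python
import numpy as np

def angle_deg(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def pick_stereo_partner(cur_kf, window, score_fn):
    """Among the last Nv keyframes in `window`, return the one maximizing
    score_fn while satisfying the angular constraints; returning None means
    the input image is discarded. Keyframes are assumed to expose .position,
    .view_dir and .up_dir; score_fn is a placeholder for the paper's criterion."""
    best, best_score = None, -np.inf
    for kf in window:
        baseline = np.asarray(kf.position) - np.asarray(cur_kf.position)
        if np.linalg.norm(baseline) == 0:
            continue
        th_pose = angle_deg(baseline, cur_kf.view_dir)   # baseline vs. viewing ray
        th_view = angle_deg(kf.view_dir, cur_kf.view_dir)
        th_up = angle_deg(kf.up_dir, cur_kf.up_dir)
        if 5 <= th_pose <= 45 and th_view <= 45 and th_up <= 30:
            s = score_fn(cur_kf, kf)
            if s > best_score:
                best, best_score = kf, s
    return best
```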


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('inertial', 0.595), ('accelerometer', 0.277), ('keyframes', 0.172), ('depth', 0.15), ('module', 0.116), ('gravity', 0.114), ('mask', 0.111), ('gyroscope', 0.104), ('ryb', 0.104), ('rzb', 0.104), ('tracking', 0.089), ('velocity', 0.088), ('computations', 0.087), ('decaying', 0.085), ('mobile', 0.083), ('phone', 0.082), ('imu', 0.08), ('estimator', 0.077), ('acceleration', 0.076), ('stereo', 0.072), ('map', 0.069), ('jpkose', 0.069), ('jukp', 0.069), ('jvkiew', 0.069), ('mapper', 0.069), ('rxb', 0.069), ('tmie', 0.069), ('zssd', 0.069), ('binocular', 0.069), ('bundle', 0.067), ('respective', 0.066), ('current', 0.064), ('rb', 0.062), ('gpu', 0.062), ('epipolar', 0.06), ('reconstruction', 0.058), ('sensors', 0.058), ('vik', 0.057), ('maintained', 0.057), ('corners', 0.056), ('thereby', 0.055), ('camera', 0.054), ('tracker', 0.054), ('smartphones', 0.054), ('devices', 0.053), ('rays', 0.051), ('buffer', 0.051), ('keyframe', 0.051), ('measurement', 0.051), ('cos', 0.051), ('ib', 0.049), ('hz', 0.049), ('coverage', 0.049), ('whenever', 0.049), ('fig', 0.049), ('position', 0.048), ('mapping', 0.048), ('adjustment', 0.048), ('displacement', 0.047), ('mod', 0.047), ('measurements', 0.046), ('fusion', 0.046), ('slam', 0.046), ('thread', 0.044), ('dense', 0.044), ('filtered', 0.044), ('kalman', 0.043), ('scale', 0.043), ('angle', 0.043), ('pose', 0.042), ('system', 0.042), ('avoided', 0.041), ('handheld', 0.041), ('already', 0.041), ('appropriate', 0.041), ('reference', 0.041), ('di', 0.04), ('estimated', 0.04), ('offsets', 0.039), ('estimate', 0.039), ('pixel', 0.039), ('subsequently', 0.039), ('processed', 0.039), ('priority', 0.039), ('triangulated', 0.039), ('inlier', 0.039), ('pyramid', 0.038), ('scaled', 0.038), ('seconds', 0.038), ('discarded', 0.038), ('visual', 0.038), ('tolerance', 0.037), ('sliding', 0.036), ('matches', 0.036), ('patch', 0.036), ('motion', 0.035), ('multiresolution', 0.035), ('resolution', 0.035), ('view', 0.035), ('capabilities', 0.035)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones

Author: Petri Tanskanen, Kalin Kolev, Lorenz Meier, Federico Camposeco, Olivier Saurer, Marc Pollefeys

Abstract: unknown-abstract

2 0.38214794 341 iccv-2013-Real-Time Body Tracking with One Depth Camera and Inertial Sensors

Author: Thomas Helten, Meinard Müller, Hans-Peter Seidel, Christian Theobalt

Abstract: In recent years, the availability of inexpensive depth cameras, such as the Microsoft Kinect, has boosted the research in monocular full body skeletal pose tracking. Unfortunately, existing trackers often fail to capture poses where a single camera provides insufficient data, such as non-frontal poses, and all other poses with body part occlusions. In this paper, we present a novel sensor fusion approach for real-time full body tracking that succeeds in such difficult situations. It takes inspiration from previous tracking solutions, and combines a generative tracker and a discriminative tracker retrieving closest poses in a database. In contrast to previous work, both trackers employ data from a low number of inexpensive body-worn inertial sensors. These sensors provide reliable and complementary information when the monocular depth information alone is not sufficient. We also contribute by new algorithmic solutions to best fuse depth and inertial data in both trackers. One is a new visibility model to determine global body pose, occlusions and usable depth correspondences and to decide what data modality to use for discriminative tracking. We also contribute with a new inertial-basedpose retrieval, and an adapted late fusion step to calculate the final body pose.

3 0.21560037 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera

Author: Jakob Engel, Jürgen Sturm, Daniel Cremers

Abstract: We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking which does not depend on visual features while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is oflargepractical valuefor robotics and augmented reality applications. – – 1. Towards Dense Monocular Visual Odometry Tracking a hand-held camera and recovering the threedimensional structure of the environment in real-time is among the most prominent challenges in computer vision. In the last years, dense approaches to these challenges have become increasingly popular: Instead of operating solely on visual feature positions, they reconstruct and track on the whole image using a surface-based map and thereby are fundamentally different from feature-based approaches. Yet, these methods are to date either not real-time capable on standard CPUs [11, 15, 17] or require direct depth measurements from the sensor [7], making them unsuitable for many practical applications. In this paper, we propose a novel semi-dense visual odometry approach for a monocular camera, which combines the accuracy and robustness of dense approaches with the efficiency of feature-based methods. Further, it computes highly accurate semi-dense depth maps from the monocular images, providing rich information about the 3D ∗ This work was supported by the ERC Starting Grant ConvexVision and the DFG project Mapping on Demand Figure1.Semi-Dens MoncularVisualOdometry:Oucfrloas rpe- proach works on a semi-dense inverse depth map and combines the accuracy and robustness of dense visual SLAM methods with the efficiency of feature-based techniques. Left: video frame, Right: color-coded semi-dense depth map, which consists of depth estimates in all image regions with sufficient structure. structure of the environment. We use the term visual odometry as supposed to SLAM, as for simplicity we deliberately maintain only information about the currently visible scene, instead of building a global world-model. – – 1.1. Related Work Feature-based monocular SLAM. In all feature-based methods (such as [4, 8]), tracking and mapping consists of two separate steps: First, discrete feature observations (i.e., their locations in the image) are extracted and matched to each other. Second, the camera and the full feature poses are calculated from a set of such observations disregarding the images themselves. While this preliminary abstrac– tion step greatly reduces the complexity of the overall problem and allows it to be tackled in real time, it inherently comes with two significant drawbacks: First, only image information conforming to the respective feature type and parametrization typically image corners and blobs [6] or line segments [9] is utilized. 
Second, features have to be matched to each other, which often requires the costly computation of scale- and rotation-invariant descriptors and robust outlier estimation methods like RANSAC. – – Dense monocular SLAM. To overcome these limitations and to better exploit the available image information, dense monocular SLAM methods [11, 17] have recently been proposed. The fundamental difference to keypoint-based approaches is that these methods directly work on the images 11444499 instead of a set of extracted features, for both mapping and tracking: The world is modeled as dense surface while in turn new frames are tracked using whole-image alignment. This concept removes the need for discrete features, and allows to exploit all information present in the image, increasing tracking accuracy and robustness. To date however, doing this in real-time is only possible using modern, powerful GPU processors. Similar methods are broadly used in combination with RGB-D cameras [7], which directly measure the depth of each pixel, or stereo camera rigs [3] greatly reducing the – complexity of the problem. Dense multi-view stereo. Significant prior work exists on multi-view dense reconstruction, both in a real-time setting [13, 11, 15], as well as off-line [5, 14]. In particular for offline reconstruction, there is a long history of using different baselines to steer the stereo-inherent trade-off between accuracy and precision [12]. Most similar to our approach is the early work of Matthies et al., who proposed probabilistic depth map fusion and propagation for image sequences [10], however only for structure from motion, i.e., not coupled with subsequent dense tracking. 1.2. Contributions In this paper, we propose a novel semi-dense approach to monocular visual odometry, which does not require feature points. The key concepts are • a probabilistic depth map representation, • tracking based on whole-image alignment, • the reduction on image-regions which carry informattihoen (esdeumctii-odenn osen), i manadg • the full incorporation of stereo measurement uncertainty. To the best of our knowledge, this is the first featureless, real-time monocular visual odometry approach, which runs in real-time on a CPU. 1.3. Method Outline Our approach is partially motivated by the basic principle that for most real-time applications, video information is abundant and cheap to come by. Therefore, the computational budget should be spent such that the expected information gain is maximized. Instead of reducing the images to a sparse set of feature observations however, our method continuously estimates a semi-dense inverse depth map for the current frame, i.e., a dense depth map covering all image regions with non-negligible gradient (see Fig. 2). It is comprised of one inverse depth hypothesis per pixel modeled by a Gaussian probability distribution. This representation still allows to use whole-image alignment [7] to track new orignalimagesemi-densedepthmap(ours)clfoasre keypointdepthmap[8]densedepthmap[1 ]RGB-Dcamera[16] Figure 2. Semi-Dense Approach: Our approach reconstructs and tracks on a semi-dense inverse depth map, which is dense in all image regions carrying information (top-right). For comparison, the bottom row shows the respective result from a keypoint-based approach, a fully dense approach and the ground truth from an RGB-D camera. frames, while at the same time greatly reducing computational complexity compared to volumetric methods. 
The estimated depth map is propagated from frame to frame, and updated with variable-baseline stereo comparisons. We explicitly use prior knowledge about a pixel’s depth to select a suitable reference frame on a per-pixel basis, and to limit the disparity search range. The remainder of this paper is organized as follows: Section 2 describes the semi-dense mapping part of the proposed method, including the derivation of the observation accuracy as well as the probabilistic data fusion, propagation and regularization steps. Section 3 describes how new frames are tracked using whole-image alignment, and Sec. 4 summarizes the complete visual odometry method. A qualitative as well as a quantitative evaluation is presented in Sec. 5. We then give a brief conclusion in Sec. 6. 2. Semi-Dense Depth Map Estimation One of the key ideas proposed in this paper is to estimate a semi-dense inverse depth map for the current camera image, which in turn can be used for estimating the camera pose of the next frame. This depth map is continuously propagated from frame to frame, and refined with new stereo depth measurements, which are obtained by performing per-pixel, adaptive-baseline stereo comparisons. This allows us to accurately estimate the depth both of close-by and far-away image regions. In contrast to previous work that accumulates the photometric cost over a sequence of several frames [11, 15], we keep exactly one inverse depth hypothesis per pixel that we represent as Gaussian probability distribution. This section is comprised of three main parts: Sec11445500 reference small baseline medium baseline large baseline tcso0120 .050.10.150.20.2sl5m areagdleiulm0.3 inverse depth d Figure 3. Variable Baseline Stereo: Reference image (left), three stereo images at different baselines (right), and the respective matching cost functions. While a small baseline (black) gives a unique, but imprecise minimum, a large baseline (red) allows for a very precise estimate, but has many false minima. tion 2. 1 describes the stereo method used to extract new depth measurements from previous frames, and how they are incorporated into the prior depth map. In Sec. 2.2, we describe how the depth map is propagated from frame to frame. In Sec. 2.3, we detail how we partially regularize the obtained depth map in each iteration, and how outliers are handled. Throughout this section, d denotes the inverse depth of a pixel. 2.1. Stereo-Based Depth Map Update It is well known [12] that for stereo, there is a trade-off between precision and accuracy (see Fig. 3). While many multiple-baseline stereo approaches resolve this by accumulating the respective cost functions over many frames [5, 13], we propose a probabilistic approach which explicitly takes advantage of the fact that in a video, smallbaseline frames are available before large-baseline frames. The full depth map update (performed once for each new frame) consists of the following steps: First, a subset of pixels is selected for which the accuracy of a disparity search is sufficiently large. For this we use three intuitive and very efficiently computable criteria, which will be derived in Sec. 2. 1.3. For each selected pixel, we then individually select a suitable reference frame, and perform a onedimensional disparity search. Propagated prior knowledge is used to reduce the disparity search range when possible, decreasing computational cost and eliminating false minima. The obtained inverse depth estimate is then fused into the depth map. 
2.1.1 Reference Frame Selection Ideally, the reference frame is chosen such that it maximizes the stereo accuracy, while keeping the disparity search range as well as the observation angle sufficiently cur ent framepixel’s “age” -4.8 s -3.9 s -3.1 s -2.2 s -1.2 s -0.8 s -0.5 s -0.4 s Figure 4. Adaptive Baseline Selection: For each pixel in the new frame (top left), a different stereo-reference frame is selected, based on how long the pixel was visible (top right: the more yellow, the older the pixel.). Some of the reference frames are displayed below, the red regions were used for stereo comparisons. small. As the stereo accuracy depends on many factors and because this selection is done for each pixel independently, we employ the following heuristic: We use the oldest frame the pixel was observed in, where the disparity search range and the observation angle do not exceed a certain threshold (see Fig. 4). If a disparity search is unsuccessful (i.e., no good match is found), the pixel’s “age” is increased, such that subsequent disparity searches use newer frames where the pixel is likely to be still visible. 2.1.2 Stereo Matching Method We perform an exhaustive search for the pixel’s intensity along the epipolar line in the selected reference frame, and then perform a sub-pixel accurate localization of the matching disparity. If a prior inverse depth hypothesis is available, the search interval is limited by d 2σd, where d and σd de,e nthoete s etharec mean avnadl ssta lnimdaiterdd d beyv dia ±tion 2σ σof the prior hypothesis. Otherwise, the full disparity range is searched. In our implementation, we use the SSD error over five equidistant points on the epipolar line: While this significantly increases robustness in high-frequent image regions, it does not change the purely one-dimensional nature of this search. Furthermore, it is computationally efficient, as 4 out ± of 5 interpolated image values can be re-used for each SSD evaluation. 2.1.3 Uncertainty Estimation In this section, we use uncertainty propagation to derive an expression for the error variance σd2 on the inverse depth d. 11445511 In general this can be done by expressing the optimal inverse depth d∗ as a function of the noisy inputs here we consider the images I0, I1 themselves, their relative orientation ξ and the camera calibration in terms of a projection function π1 – d∗ = d(I0, I1, ξ, π) . The error-variance of d∗ is then given by σd2 = JdΣJdT, (1) (2) where Jd is the Jacobian of d, and Σ the covariance of the input-error. For more details on covariance propagation, including the derivation of this formula, we refer to [2]. For simplicity, the following analysis is performed for patchfree stereo, i.e., we consider only a point-wise search for a single intensity value along the epipolar line. For this analysis, we split the computation into three steps: First, the epipolar line in the reference frame is computed. Second, the best matching position λ∗ ∈ R along it (i.e., the disparity) is determined. Third, the i∈nv eRrse al depth d∗ is computed from the disparity λ∗ . The first two steps involve two independent error sources: the geometric error, which originates from noise on ξ and π and affects the first step, and the photometric error, which originates from noise in the images I0, I1 and affects the second step. The third step scales these errors by a factor, which depends on the baseline. Geometric disparity error. The geometric error is the error ?λ on the disparity λ∗ caused by noise on ξ and π. 
While it would be possible to model, propagate, and estimate the complete covariance on ξ and π, we found that the gain in accuracy does not justify the increase in computational complexity. We therefore use an intuitive approximation: Let the considered epipolar line segment L ⊂ R2 be deLfineted th by L := ?l0 + λ?llyx? |λ ∈ S? , (3) where λ is the disparity with search interval S, (lx , ly)T the normalized epipolar line direction and l0 the point corresponding to infinite depth. We now assume that only the absolute position of this line segment, i.e., l0 is subject to isotropic Gaussian noise ?l . As in practice we keep the searched epipolar line segments short, the influence of rotational error is small, making this a good approximation. Intuitively, a positioning error ?l on the epipolar line causes a small disparity error ?λ if the epipolar line is parallel to the image gradient, and a large one otherwise (see Fig. 5). This can be mathematically derived as follows: The image constrains the optimal disparity λ∗ to lie on a certain isocurve, i.e. a curve of equal intensity. We approximate 1In the linear case, this is the camera matrix K – in practice however, nonlinear distortion and other (unmodeled) effects also play a role. FiguLre5.Geo?l mλetricDigs,palrityEroL?rl:Influe?nλceofgasmla posi- tioning error ?l of the epipolar line on the disparity error ?λ . The dashed line represents the isocurve on which the matching point has to lie. ?λ is small if the epipolar line is parallel to the image gradient (left), and a large otherwise (right). this isocurve to be locally linear, i.e. the gradient direction to be locally constant. This gives l0 + λ∗ ?llxy? =! + γ?−gxgy?, g0 γ ∈ R (4) where g := (gx , gy) ?is the image gradient and g0 a point on the isoline. The influence of noise on the image values will be derived in the next paragraph, hence at this point g and g0 are assumed noise-free. Solving for λ gives the optimal disparity λ∗ in terms of the noisy input l0: λ∗(l0) =?g,g?g0,−l? l0? (5) Analogously to (2), the variance of the geometric disparity error can then be expressed as σλ2(ξ,π)= Jλ∗(l0)?σ0l2 σ0l2?JλT∗(l0)=?gσ,l 2?2, (6) where g is the normalized image gradient, lthe normalized epipolar line direction and σl2 the variance of ?l. Note that this error term solely originates from noise on the relative camera orientation and the camera calibration π, i.e., it is independent of image intensity noise. ξ Photometric disparity error. Intuitively, this error encodes that small image intensity errors have a large effect on the estimated disparity if the image gradient is small, and a small effect otherwise (see Fig. 6). Mathematically, this relation can be derived as follows. We seek the disparity λ∗ that minimizes the difference in intensities, i.e., λ∗ = mλin (iref − Ip(λ))2, (7) where iref is the reference intensity, and Ip(λ) the image intensity on the epipolar line at disparity λ. We assume a good initialization λ0 to be available from the exhaustive search. Using a first-order Taylor approximation for Ip gives λ∗(I) = λ0 + (iref − Ip(λ0)) g−p1, (8) where gp is the gradient of Ip, that is image gradient along the epipolar line. For clarity we only consider noise on iref and Ip(λ0) ; equivalent results are obtained in the general case when taking into account noise on the image values involved in the computation of gp. The variance of the pho11445522 ?i Ip?λ ?iiIp?λλ Figure 6. 
Photometric Disparity Error: Noise ?i on the image intensity values causes a small disparity error ?λ if the image gradient along the epipolar line is large (left). If the gradient is small, the disparity error is magnified (right). tometric disparity error is given by σλ2(I) = Jλ∗(I)?σ0i2 σ0i2?Jλ∗(I) =2gσ2pi2, (9) where σi2 is the variance of the image intensity noise. The respective error originates solely from noisy image intensity values, and hence is independent of the geometric disparity error. Pixel to inverse depth conversion. Using that, for small camera rotation, the inverse depth d is approximately proportional to the disparity λ, the observation variance of the inverse depth σd2,obs can be calculated using σd2,obs = α2 ?σ2λ(ξ,π) + σλ2(I)? , (10) where the proportionality ?constant α in th?e general, nonrectified case – is different for each pixel, and can be calculated from – α :=δδdλ, (11) where δd is the length of the searched inverse depth interval, and δλ the length of the searched epipolar line segment. While α is inversely linear in the length of the camera translation, it also depends on the translation direction and the pixel’s location in the image. When using an SSD error over multiple points along the epipolar line – as our implementation does – a good upper bound for the matching uncertainty is then given by ?min{σ2λ(ξ,π)} + min{σλ2(I)}? σd2,obs-SSD ≤ α2 , (12) where the min goes over all points included in the? SSD error. 2.1.4 Depth Observation Fusion After a depth observation for a pixel in the current image has been obtained, we integrate it into the depth map as follows: If no prior hypothesis for a pixel exists, we initialize it directly with the observation. Otherwise, the new observation is incorporated into the prior, i.e., the two distribu- tions are multiplied (corresponding to the update step in a Knoailsmya onb fsieltrvera)t:io Gniv Nen(do a, pσrio2o)r, d thiest priobsutetiroionr N is( gdipv,eσnp2 b)y and a N?σ2pdσo2p++ σ σo2o2dp,σ2σpp2+σo2 σo2?. 2.1.5 (13) Summary of Uncertainty-Aware Stereo New stereo observations are obtained on a per-pixel basis, adaptively selecting for each pixel a suitable reference frame and performing a one-dimensional search along the epipolar line. We identified the three major factors which determine the accuracy of such a stereo observation, i.e., • the photometric disparity error σλ2(ξ,π), depending on tphheo magnitude sofp trhiet image gradient along the epipolar line, • the geometric disparity error σλ2(I) ,depending on the athnegl gee bometewtereinc dthisep image gradient and the epipolar line (independent of the gradient magnitude), and • the pixel to inverse depth ratio α, depending on the camera etlra tons ilantvioenrs, eth dee pfothcal r length ,a dndep tehned pixel’s position. These three simple-to-compute and purely local criteria are used to determine for which pixel a stereo update is worth the computational cost. Further, the computed observation variance is then used to integrate the new measurements into the existing depth map. 2.2. Depth Map Propagation We continuously propagate the estimated inverse depth map from frame to frame, once the camera position of the next frame has been estimated. Based on the inverse depth estimate d0 for a pixel, the corresponding 3D point is calculated and projected into the new frame, providing an inverse depth estimate d1 in the new frame. 
The hypothesis is then assigned to the closest integer pixel position to eliminate discretization errors, the sub-pixel accurate image location of the projected point is kept, and re-used for the next propagation step. For propagating the inverse depth variance, we assume the camera rotation to be small. The new inverse depth d1 can then be approximated by – d1(d0) = (d0−1 − tz)−1, (14) where tz is the camera translation along the optical axis. The variance of d1 is hence given by σd21= Jd1σd20JTd1+ σp2=?dd01?4σd20+ σp2, (15) where σp2 is the prediction uncertainty, which directly corresponds to the prediction step in an extended Kalman filter. It can also be interpreted as keeping the variance on 11445533 in the top right shows the new frame I2 (x) without depth information. Middle: Intermediate steps while minimizing E(ξ) on different pyramid levels. The top row shows the back-warped new frame I2 (w(x, d, ξ)), the bottom row shows the respective residual image I2 (w(x, di,ξ)) − I1 (x) . The bottom right image shows the final pixel-weights (black = small weight). Small weights mainly correspond to newly oc,cξl)ud)e −d or disoccluded pixel. tWhe z fo-cuonodrtd hina t uesi onfg a sm poailnlt v failxue ds, fo i.re. σ,p2 sedteticnrgea σsez2s0 d=rift σ,z2 a1s. it causes the estimated geometry to gradually ”lock” into place. Collision handling. At all times, we allow at most one inverse depth hypothesis per pixel: If two inverse depth hypothesis are propagated to the same pixel in the new frame, we distinguish between two cases: 1. if they are statistically similar, i.e., lie within 2σ bounds, they are treated as two independent observations of the pixel’s depth and fused according to (13). 2. otherwise, the point that is further away from the camera is assumed to be occluded, and is removed. 2.3. Depth Map Regularization For each frame – after all observations have been incorporated – we perform one regularization iteration by assign- ing each inverse depth value the average of the surrounding inverse depths, weighted by their respective inverse variance. To preserve sharp edges, if two adjacent inverse depth values are statistically different, i.e., are further away than 2σ, they do not contribute to one another. Note that the respective variances are not changed during regularization to account for the high correlation between neighboring hypotheses. Instead we use the minimal variance of all neighboring pixel when defining the stereo search range, and as a weighting factor for tracking (see Sec. 3). Outlier removal. To handle outliers, we continuously keep track of the validity of each inverse depth hypothesis in terms of the probability that it is an outlier, or has become invalid (e.g., due to occlusion or a moving object). For each successful stereo observation, this probability is decreased. It is increased for each failed stereo search, if the respective intensity changes significantly on propagation, or when the absolute image gradient falls below a given threshold. If, during regularization, the probability that all contributing neighbors are outliers i.e., the product of their individual outlier-probabilities rises above a given threshold, the hypothesis is removed. Equally, if for an “empty” pixel this product drops below a given threshold, a new hypothesis is created from the neighbors. 
This fills holes arising from the forward-warping nature of the propagation step, and dilates the semi-dense depth map to a small neighborhood around sharp image intensity edges, which signifi– – × cantly increases tracking and mapping robustness. 3. Dense Tracking Based on the inverse depth map of the previous frame, we estimate the camera pose of the current frame using dense image alignment. Such methods have previously been applied successfully (in real-time on a CPU) for tracking RGB-D cameras [7], which directly provide dense depth measurements along with the color image. It is based on the direct minimization of the photometric error ri (ξ) := (I2 (w(xi, di , ξ)) − I1 , (16) where the warp function w : Ω1 R R6 → Ω2 maps each point xi ∈ Ω1 in the reference× image RI1 →to Ωthe respective point w(x∈i, Ωdi, ξ) ∈ Ω2 in the new image I2. As input it requires the 3D,ξ pose Ωof the camera ξ ∈ R6 and uses the reestqiumiraetesd t hienv 3erDse p depth fd it ∈e cRa mfore rthae ξ pixel in I1. Note that no depth information with respect t toh Ie2 p i sx required. To increase robustness to self-occlusion and moving objects, we apply a weighting scheme as proposed in [7]. Further, we add the variance of the inverse depth σd2i as an additional weighting term, making the tracking resistant to recently initialized and still inaccurate depth estimates from 11445544 (xi))2 Figure 8. Examples: Top: Camera images overlaid with the respective stimated semi-dense inverse depth map. Bot om: 3D view of tracked scene. Note the versatility of our approach: It accurately reconstructs and tracks through (outside) scenes with a large depth- variance, including far-away objects like clouds , as well as (indoor) scenes with little structure and close to no image corners / keypoints. More examples are shown in the attached video. the mapping process. The final energy that is minimized is hence given by E(ξ) :=?iα(rσid2(iξ))ri(ξ), (17) where α : R → R defines the weight for a given residual. Minimizing t h→is error can b thee interpreted as computing uthale. maximum likelihood estimator for ξ, assuming independent noise on the image intensity values. The resulting weighted least-squares problem is solved efficiently using an iteratively reweighted Gauss-Newton algorithm coupled with a coarse-to-fine approach, using four pyramid levels. Figure 7 shows an example of the tracking process. For further details on the minimization we refer to [1]. 4. System Overview Tracking and depth estimation is split into two separate threads: One continuously propagates the inverse depth map to the most recent tracked frame, updates it with stereocomparisons and partially regularizes it. The other simultaneously tracks each incoming frame on the most recent available depth map. While tracking is performed in real- time at 30Hz, one complete mapping iteration takes longer and is hence done at roughly 15Hz if the map is heavily populated, we adaptively reduce the number of stereo comparisons to maintain a constant frame-rate. For stereo observations, a buffer of up to 100 past frames is kept, automatically removing those that are used least. We use a standard, keypoint-based method to obtain the relative camera pose between two initial frames, which are then used to initialize the inverse depth map needed for tracking successive frames. From this point onward, our method is entirely self-contained. 
In preliminary experiments, we found that in most cases our approach is even able to recover from random or extremely inaccurate initial depth maps, indicating that the keypoint-based initialization might become superfluous in the future. Table 1. Results on RGB-D Benchmark position drift (cm/s) rotation drift (deg/s) ours [7] [8] ours [7] [8] – fr2/xyz fr2/desk 0.6 2.1 0.6 2.0 8.2 - 0.33 0.65 0.34 0.70 3.27 - 5. Results We have tested our approach on both publicly available benchmark sequences, as well as live, using a hand-held camera. Some examples are shown in Fig. 8. Note that our method does not attempt to build a global map, i.e., once a point leaves the field of view of the camera or becomes occluded, the respective depth value is deleted. All experiments are performed on a standard consumer laptop with Intel i7 quad-core CPU. In a preprocessing step, we rectify all images such that a pinhole camera-model can be applied. 5.1. RGB-D Benchmark Sequences As basis for a quantitative evaluation and to facilitate reproducibility and easy comparison with other methods, we use the TUM RGB-D benchmark [16]. For tracking and mapping we only use the gray-scale images; for the very first frame however the provided depth image is used as initialization. Our method (like any monocular visual odometry method) fails in case of pure camera rotation, as the depth of new regions cannot be determined. The achieved tracking accuracy for two feasible sequences that is, sequences which do not contain strong camera rotation without simultaneous translation is given in Table 1. For comparison we also list the accuracy from (1) a state-of-the-art, dense RGB-D odometry [7], and (2) a state-of-the-art, keypointbased monocular SLAM system (PTAM, [8]). We initialize PTAM using the built-in stereo initializer, and perform a 7DoF (rigid body plus scale) alignment to the ground truth trajectory. Figure 9 shows the tracked camera trajectory for fr2/desk. We found that our method achieves similar accu– – 11445555 era the the the trajectory (black), the depth map of the first frame (blue), and estimated depth map (gray-scale) after a complete loop around table. Note how well certain details such as the keyboard and monitor align. racy as [7] which uses the same dense tracking algorithm but relies on the Kinect depth images. The keypoint-based approach [8] proves to be significantly less accurate and robust; it consistently failed after a few seconds for the second sequence. 5.2. Additional Test Sequences To analyze our approach in more detail, we recorded additional challenging sequences with the corresponding ground truth trajectory in a motion capture studio. Figure 10 shows an extract from the video, as well as the tracked and the ground-truth camera position over time. As can be seen from the figure, our approach is able to maintain a reasonably dense depth map at all times and the estimated camera trajectory matches closely the ground truth. 6. Conclusion In this paper we proposed a novel visual odometry method for a monocular camera, which does not require discrete features. In contrast to previous work on dense tracking and mapping, our approach is based on probabilistic depth map estimation and fusion over time. Depth measurements are obtained from patch-free stereo matching in different reference frames at a suitable baseline, which are selected on a per-pixel basis. To our knowledge, this is the first featureless monocular visual odometry method which runs in real-time on a CPU. 
In our experiments, we showed that the tracking performance of our approach is comparable to that of fully dense methods without requiring a depth sensor. References [1] S. Baker and I. Matthews. Lucas-Kanade 20 years on: A unifying framework. Technical report, Carnegie Mellon Univ., 2002. 7 [2] A. Clifford. Multivariate Error Analysis. John Wiley & Sons, 1973. 4 sionpito[m ]− 024 2 0 s1xzy0s20s30s40s50s60s Figure 10. Additional Sequence: Estimated camera trajectory and ground truth (dashed) for a long and challenging sequence. The complete sequence is shown in the attached video. [3] A. Comport, E. Malis, and P. Rives. Accurate quadri-focal tracking for robust 3d visual odometry. In ICRA, 2007. 2 [4] A. Davison, I. Reid, N. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 29, 2007. 1 [5] D. Gallup, J. Frahm, P. Mordohai, and M. Pollefeys. Variable baseline/resolution stereo. In CVPR, 2008. 2, 3 [6] C. Harris and M. Stephens. A combined corner and edge detector. In Alvey Vision Conference, 1988. 1 [7] C. Kerl, J. Sturm, and D. Cremers. Robust odometry estimation for RGB-D cameras. In ICRA, 2013. 1, 2, 6, 7, 8 [8] G. Klein and D. Murray. Parallel tracking and mapping for small AR workspaces. In Mixed and Augmented Reality (ISMAR), 2007. 1, 2, 7, 8 [9] G. Klein and D. Murray. Improving the agility of keyframebased SLAM. In ECCV, 2008. 1 [10] M. Pollefes et al. Detailed real-time urban 3d reconstruction from video. IJCV, 78(2-3): 143–167, 2008. 2, 3 [11] L. Matthies, R. Szeliski, and T. Kanade. Incremental estimation of dense depth maps from image image sequences. In CVPR, 1988. 2 [12] R. Newcombe, S. Lovegrove, and A. Davison. DTAM: Dense tracking and mapping in real-time. In ICCV, 2011. 1, 2 [13] M. Okutomi and T. Kanade. A multiple-baseline stereo. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 15(4):353–363, 1993. 2, 3 [14] T. Sato, M. Kanbara, N. Yokoya, and H. Takemura. Dense 3-d reconstruction of an outdoor scene by hundreds-baseline stereo using a hand-held camera. IJCV, 47: 1–3, 2002. 2 [15] J. Stuehmer, S. Gumhold, and D. Cremers. Real-time dense geometry from a handheld camera. In Pattern Recognition (DAGM), 2010. 1, 2 [16] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for the evaluation of RGB-D SLAM systems. In Intelligent Robot Systems (IROS), 2012. 2, 7 [17] A. Wendel, M. Maurer, G. Graber, T. Pock, and H. Bischof. Dense reconstruction on-the-fly. In ECCV, 2012. 1 11445566

4 0.15690939 367 iccv-2013-SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels

Author: Jianxiong Xiao, Andrew Owens, Antonio Torralba

Abstract: Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them together, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all avail– able at http://sun3d.cs.princeton.edu.

5 0.14180182 366 iccv-2013-STAR3D: Simultaneous Tracking and Reconstruction of 3D Objects Using RGB-D Data

Author: Carl Yuheng Ren, Victor Prisacariu, David Murray, Ian Reid

Abstract: We introduce a probabilistic framework for simultaneous tracking and reconstruction of 3D rigid objects using an RGB-D camera. The tracking problem is handled using a bag-of-pixels representation and a back-projection scheme. Surface and background appearance models are learned online, leading to robust tracking in the presence of heavy occlusion and outliers. In both our tracking and reconstruction modules, the 3D object is implicitly embedded using a 3D level-set function. The framework is initialized with a simple shape primitive model (e.g. a sphere or a cube), and the real 3D object shape is tracked and reconstructed online. Unlike existing depth-based 3D reconstruction works, which either rely on calibrated/fixed camera set up or use the observed world map to track the depth camera, our framework can simultaneously track and reconstruct small moving objects. We use both qualitative and quantitative results to demonstrate the superior performance of both tracking and reconstruction of our method.

6 0.13458328 444 iccv-2013-Viewing Real-World Faces in 3D

7 0.12382463 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera

8 0.12219074 209 iccv-2013-Image Guided Depth Upsampling Using Anisotropic Total Generalized Variation

9 0.10649186 133 iccv-2013-Efficient Hand Pose Estimation from a Single Depth Image

10 0.10538581 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects

11 0.1034976 18 iccv-2013-A Joint Intensity and Depth Co-sparse Analysis Model for Depth Map Super-resolution

12 0.1025841 424 iccv-2013-Tracking Revisited Using RGBD Camera: Unified Benchmark and Baselines

13 0.093850076 402 iccv-2013-Street View Motion-from-Structure-from-Motion

14 0.092735238 317 iccv-2013-Piecewise Rigid Scene Flow

15 0.09027978 111 iccv-2013-Detecting Dynamic Objects with Multi-view Background Subtraction

16 0.089373901 228 iccv-2013-Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences

17 0.087744161 218 iccv-2013-Interactive Markerless Articulated Hand Motion Tracking Using RGB and Depth Data

18 0.087116264 108 iccv-2013-Depth from Combining Defocus and Correspondence Using Light-Field Cameras

19 0.085475713 12 iccv-2013-A General Dense Image Matching Framework Combining Direct and Feature-Based Costs

20 0.084034286 79 iccv-2013-Coherent Object Detection with 3D Geometric Context from a Single Image


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.213), (1, -0.186), (2, -0.026), (3, 0.051), (4, 0.017), (5, -0.044), (6, -0.004), (7, -0.096), (8, -0.067), (9, 0.127), (10, 0.0), (11, -0.054), (12, -0.039), (13, 0.027), (14, -0.039), (15, -0.032), (16, 0.01), (17, -0.096), (18, -0.012), (19, 0.046), (20, 0.001), (21, 0.032), (22, -0.069), (23, 0.039), (24, 0.007), (25, -0.005), (26, -0.015), (27, 0.061), (28, -0.007), (29, 0.085), (30, -0.02), (31, -0.054), (32, -0.062), (33, 0.04), (34, -0.051), (35, -0.022), (36, 0.061), (37, 0.012), (38, 0.034), (39, 0.037), (40, -0.013), (41, -0.003), (42, 0.045), (43, 0.011), (44, -0.03), (45, -0.041), (46, 0.031), (47, 0.013), (48, -0.019), (49, 0.117)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95190686 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones

Author: Petri Tanskanen, Kalin Kolev, Lorenz Meier, Federico Camposeco, Olivier Saurer, Marc Pollefeys

Abstract: unkown-abstract

2 0.86086625 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera

Author: Jakob Engel, Jürgen Sturm, Daniel Cremers

Abstract: We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking which does not depend on visual features while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is oflargepractical valuefor robotics and augmented reality applications. – – 1. Towards Dense Monocular Visual Odometry Tracking a hand-held camera and recovering the threedimensional structure of the environment in real-time is among the most prominent challenges in computer vision. In the last years, dense approaches to these challenges have become increasingly popular: Instead of operating solely on visual feature positions, they reconstruct and track on the whole image using a surface-based map and thereby are fundamentally different from feature-based approaches. Yet, these methods are to date either not real-time capable on standard CPUs [11, 15, 17] or require direct depth measurements from the sensor [7], making them unsuitable for many practical applications. In this paper, we propose a novel semi-dense visual odometry approach for a monocular camera, which combines the accuracy and robustness of dense approaches with the efficiency of feature-based methods. Further, it computes highly accurate semi-dense depth maps from the monocular images, providing rich information about the 3D ∗ This work was supported by the ERC Starting Grant ConvexVision and the DFG project Mapping on Demand Figure1.Semi-Dens MoncularVisualOdometry:Oucfrloas rpe- proach works on a semi-dense inverse depth map and combines the accuracy and robustness of dense visual SLAM methods with the efficiency of feature-based techniques. Left: video frame, Right: color-coded semi-dense depth map, which consists of depth estimates in all image regions with sufficient structure. structure of the environment. We use the term visual odometry as supposed to SLAM, as for simplicity we deliberately maintain only information about the currently visible scene, instead of building a global world-model. – – 1.1. Related Work Feature-based monocular SLAM. In all feature-based methods (such as [4, 8]), tracking and mapping consists of two separate steps: First, discrete feature observations (i.e., their locations in the image) are extracted and matched to each other. Second, the camera and the full feature poses are calculated from a set of such observations disregarding the images themselves. While this preliminary abstrac– tion step greatly reduces the complexity of the overall problem and allows it to be tackled in real time, it inherently comes with two significant drawbacks: First, only image information conforming to the respective feature type and parametrization typically image corners and blobs [6] or line segments [9] is utilized. 
Second, features have to be matched to each other, which often requires the costly computation of scale- and rotation-invariant descriptors and robust outlier estimation methods like RANSAC. – – Dense monocular SLAM. To overcome these limitations and to better exploit the available image information, dense monocular SLAM methods [11, 17] have recently been proposed. The fundamental difference to keypoint-based approaches is that these methods directly work on the images 11444499 instead of a set of extracted features, for both mapping and tracking: The world is modeled as dense surface while in turn new frames are tracked using whole-image alignment. This concept removes the need for discrete features, and allows to exploit all information present in the image, increasing tracking accuracy and robustness. To date however, doing this in real-time is only possible using modern, powerful GPU processors. Similar methods are broadly used in combination with RGB-D cameras [7], which directly measure the depth of each pixel, or stereo camera rigs [3] greatly reducing the – complexity of the problem. Dense multi-view stereo. Significant prior work exists on multi-view dense reconstruction, both in a real-time setting [13, 11, 15], as well as off-line [5, 14]. In particular for offline reconstruction, there is a long history of using different baselines to steer the stereo-inherent trade-off between accuracy and precision [12]. Most similar to our approach is the early work of Matthies et al., who proposed probabilistic depth map fusion and propagation for image sequences [10], however only for structure from motion, i.e., not coupled with subsequent dense tracking. 1.2. Contributions In this paper, we propose a novel semi-dense approach to monocular visual odometry, which does not require feature points. The key concepts are • a probabilistic depth map representation, • tracking based on whole-image alignment, • the reduction on image-regions which carry informattihoen (esdeumctii-odenn osen), i manadg • the full incorporation of stereo measurement uncertainty. To the best of our knowledge, this is the first featureless, real-time monocular visual odometry approach, which runs in real-time on a CPU. 1.3. Method Outline Our approach is partially motivated by the basic principle that for most real-time applications, video information is abundant and cheap to come by. Therefore, the computational budget should be spent such that the expected information gain is maximized. Instead of reducing the images to a sparse set of feature observations however, our method continuously estimates a semi-dense inverse depth map for the current frame, i.e., a dense depth map covering all image regions with non-negligible gradient (see Fig. 2). It is comprised of one inverse depth hypothesis per pixel modeled by a Gaussian probability distribution. This representation still allows to use whole-image alignment [7] to track new orignalimagesemi-densedepthmap(ours)clfoasre keypointdepthmap[8]densedepthmap[1 ]RGB-Dcamera[16] Figure 2. Semi-Dense Approach: Our approach reconstructs and tracks on a semi-dense inverse depth map, which is dense in all image regions carrying information (top-right). For comparison, the bottom row shows the respective result from a keypoint-based approach, a fully dense approach and the ground truth from an RGB-D camera. frames, while at the same time greatly reducing computational complexity compared to volumetric methods. 
The estimated depth map is propagated from frame to frame, and updated with variable-baseline stereo comparisons. We explicitly use prior knowledge about a pixel’s depth to select a suitable reference frame on a per-pixel basis, and to limit the disparity search range. The remainder of this paper is organized as follows: Section 2 describes the semi-dense mapping part of the proposed method, including the derivation of the observation accuracy as well as the probabilistic data fusion, propagation and regularization steps. Section 3 describes how new frames are tracked using whole-image alignment, and Sec. 4 summarizes the complete visual odometry method. A qualitative as well as a quantitative evaluation is presented in Sec. 5. We then give a brief conclusion in Sec. 6. 2. Semi-Dense Depth Map Estimation One of the key ideas proposed in this paper is to estimate a semi-dense inverse depth map for the current camera image, which in turn can be used for estimating the camera pose of the next frame. This depth map is continuously propagated from frame to frame, and refined with new stereo depth measurements, which are obtained by performing per-pixel, adaptive-baseline stereo comparisons. This allows us to accurately estimate the depth both of close-by and far-away image regions. In contrast to previous work that accumulates the photometric cost over a sequence of several frames [11, 15], we keep exactly one inverse depth hypothesis per pixel that we represent as Gaussian probability distribution. This section is comprised of three main parts: Sec11445500 reference small baseline medium baseline large baseline tcso0120 .050.10.150.20.2sl5m areagdleiulm0.3 inverse depth d Figure 3. Variable Baseline Stereo: Reference image (left), three stereo images at different baselines (right), and the respective matching cost functions. While a small baseline (black) gives a unique, but imprecise minimum, a large baseline (red) allows for a very precise estimate, but has many false minima. tion 2. 1 describes the stereo method used to extract new depth measurements from previous frames, and how they are incorporated into the prior depth map. In Sec. 2.2, we describe how the depth map is propagated from frame to frame. In Sec. 2.3, we detail how we partially regularize the obtained depth map in each iteration, and how outliers are handled. Throughout this section, d denotes the inverse depth of a pixel. 2.1. Stereo-Based Depth Map Update It is well known [12] that for stereo, there is a trade-off between precision and accuracy (see Fig. 3). While many multiple-baseline stereo approaches resolve this by accumulating the respective cost functions over many frames [5, 13], we propose a probabilistic approach which explicitly takes advantage of the fact that in a video, smallbaseline frames are available before large-baseline frames. The full depth map update (performed once for each new frame) consists of the following steps: First, a subset of pixels is selected for which the accuracy of a disparity search is sufficiently large. For this we use three intuitive and very efficiently computable criteria, which will be derived in Sec. 2. 1.3. For each selected pixel, we then individually select a suitable reference frame, and perform a onedimensional disparity search. Propagated prior knowledge is used to reduce the disparity search range when possible, decreasing computational cost and eliminating false minima. The obtained inverse depth estimate is then fused into the depth map. 
2.1.1 Reference Frame Selection Ideally, the reference frame is chosen such that it maximizes the stereo accuracy, while keeping the disparity search range as well as the observation angle sufficiently cur ent framepixel’s “age” -4.8 s -3.9 s -3.1 s -2.2 s -1.2 s -0.8 s -0.5 s -0.4 s Figure 4. Adaptive Baseline Selection: For each pixel in the new frame (top left), a different stereo-reference frame is selected, based on how long the pixel was visible (top right: the more yellow, the older the pixel.). Some of the reference frames are displayed below, the red regions were used for stereo comparisons. small. As the stereo accuracy depends on many factors and because this selection is done for each pixel independently, we employ the following heuristic: We use the oldest frame the pixel was observed in, where the disparity search range and the observation angle do not exceed a certain threshold (see Fig. 4). If a disparity search is unsuccessful (i.e., no good match is found), the pixel’s “age” is increased, such that subsequent disparity searches use newer frames where the pixel is likely to be still visible. 2.1.2 Stereo Matching Method We perform an exhaustive search for the pixel’s intensity along the epipolar line in the selected reference frame, and then perform a sub-pixel accurate localization of the matching disparity. If a prior inverse depth hypothesis is available, the search interval is limited by d 2σd, where d and σd de,e nthoete s etharec mean avnadl ssta lnimdaiterdd d beyv dia ±tion 2σ σof the prior hypothesis. Otherwise, the full disparity range is searched. In our implementation, we use the SSD error over five equidistant points on the epipolar line: While this significantly increases robustness in high-frequent image regions, it does not change the purely one-dimensional nature of this search. Furthermore, it is computationally efficient, as 4 out ± of 5 interpolated image values can be re-used for each SSD evaluation. 2.1.3 Uncertainty Estimation In this section, we use uncertainty propagation to derive an expression for the error variance σd2 on the inverse depth d. 11445511 In general this can be done by expressing the optimal inverse depth d∗ as a function of the noisy inputs here we consider the images I0, I1 themselves, their relative orientation ξ and the camera calibration in terms of a projection function π1 – d∗ = d(I0, I1, ξ, π) . The error-variance of d∗ is then given by σd2 = JdΣJdT, (1) (2) where Jd is the Jacobian of d, and Σ the covariance of the input-error. For more details on covariance propagation, including the derivation of this formula, we refer to [2]. For simplicity, the following analysis is performed for patchfree stereo, i.e., we consider only a point-wise search for a single intensity value along the epipolar line. For this analysis, we split the computation into three steps: First, the epipolar line in the reference frame is computed. Second, the best matching position λ∗ ∈ R along it (i.e., the disparity) is determined. Third, the i∈nv eRrse al depth d∗ is computed from the disparity λ∗ . The first two steps involve two independent error sources: the geometric error, which originates from noise on ξ and π and affects the first step, and the photometric error, which originates from noise in the images I0, I1 and affects the second step. The third step scales these errors by a factor, which depends on the baseline. Geometric disparity error. The geometric error is the error ?λ on the disparity λ∗ caused by noise on ξ and π. 
While it would be possible to model, propagate, and estimate the complete covariance on ξ and π, we found that the gain in accuracy does not justify the increase in computational complexity. We therefore use an intuitive approximation: Let the considered epipolar line segment L ⊂ R2 be deLfineted th by L := ?l0 + λ?llyx? |λ ∈ S? , (3) where λ is the disparity with search interval S, (lx , ly)T the normalized epipolar line direction and l0 the point corresponding to infinite depth. We now assume that only the absolute position of this line segment, i.e., l0 is subject to isotropic Gaussian noise ?l . As in practice we keep the searched epipolar line segments short, the influence of rotational error is small, making this a good approximation. Intuitively, a positioning error ?l on the epipolar line causes a small disparity error ?λ if the epipolar line is parallel to the image gradient, and a large one otherwise (see Fig. 5). This can be mathematically derived as follows: The image constrains the optimal disparity λ∗ to lie on a certain isocurve, i.e. a curve of equal intensity. We approximate 1In the linear case, this is the camera matrix K – in practice however, nonlinear distortion and other (unmodeled) effects also play a role. FiguLre5.Geo?l mλetricDigs,palrityEroL?rl:Influe?nλceofgasmla posi- tioning error ?l of the epipolar line on the disparity error ?λ . The dashed line represents the isocurve on which the matching point has to lie. ?λ is small if the epipolar line is parallel to the image gradient (left), and a large otherwise (right). this isocurve to be locally linear, i.e. the gradient direction to be locally constant. This gives l0 + λ∗ ?llxy? =! + γ?−gxgy?, g0 γ ∈ R (4) where g := (gx , gy) ?is the image gradient and g0 a point on the isoline. The influence of noise on the image values will be derived in the next paragraph, hence at this point g and g0 are assumed noise-free. Solving for λ gives the optimal disparity λ∗ in terms of the noisy input l0: λ∗(l0) =?g,g?g0,−l? l0? (5) Analogously to (2), the variance of the geometric disparity error can then be expressed as σλ2(ξ,π)= Jλ∗(l0)?σ0l2 σ0l2?JλT∗(l0)=?gσ,l 2?2, (6) where g is the normalized image gradient, lthe normalized epipolar line direction and σl2 the variance of ?l. Note that this error term solely originates from noise on the relative camera orientation and the camera calibration π, i.e., it is independent of image intensity noise. ξ Photometric disparity error. Intuitively, this error encodes that small image intensity errors have a large effect on the estimated disparity if the image gradient is small, and a small effect otherwise (see Fig. 6). Mathematically, this relation can be derived as follows. We seek the disparity λ∗ that minimizes the difference in intensities, i.e., λ∗ = mλin (iref − Ip(λ))2, (7) where iref is the reference intensity, and Ip(λ) the image intensity on the epipolar line at disparity λ. We assume a good initialization λ0 to be available from the exhaustive search. Using a first-order Taylor approximation for Ip gives λ∗(I) = λ0 + (iref − Ip(λ0)) g−p1, (8) where gp is the gradient of Ip, that is image gradient along the epipolar line. For clarity we only consider noise on iref and Ip(λ0) ; equivalent results are obtained in the general case when taking into account noise on the image values involved in the computation of gp. The variance of the pho11445522 ?i Ip?λ ?iiIp?λλ Figure 6. 
Photometric Disparity Error: Noise ?i on the image intensity values causes a small disparity error ?λ if the image gradient along the epipolar line is large (left). If the gradient is small, the disparity error is magnified (right). tometric disparity error is given by σλ2(I) = Jλ∗(I)?σ0i2 σ0i2?Jλ∗(I) =2gσ2pi2, (9) where σi2 is the variance of the image intensity noise. The respective error originates solely from noisy image intensity values, and hence is independent of the geometric disparity error. Pixel to inverse depth conversion. Using that, for small camera rotation, the inverse depth d is approximately proportional to the disparity λ, the observation variance of the inverse depth σd2,obs can be calculated using σd2,obs = α2 ?σ2λ(ξ,π) + σλ2(I)? , (10) where the proportionality ?constant α in th?e general, nonrectified case – is different for each pixel, and can be calculated from – α :=δδdλ, (11) where δd is the length of the searched inverse depth interval, and δλ the length of the searched epipolar line segment. While α is inversely linear in the length of the camera translation, it also depends on the translation direction and the pixel’s location in the image. When using an SSD error over multiple points along the epipolar line – as our implementation does – a good upper bound for the matching uncertainty is then given by ?min{σ2λ(ξ,π)} + min{σλ2(I)}? σd2,obs-SSD ≤ α2 , (12) where the min goes over all points included in the? SSD error. 2.1.4 Depth Observation Fusion After a depth observation for a pixel in the current image has been obtained, we integrate it into the depth map as follows: If no prior hypothesis for a pixel exists, we initialize it directly with the observation. Otherwise, the new observation is incorporated into the prior, i.e., the two distribu- tions are multiplied (corresponding to the update step in a Knoailsmya onb fsieltrvera)t:io Gniv Nen(do a, pσrio2o)r, d thiest priobsutetiroionr N is( gdipv,eσnp2 b)y and a N?σ2pdσo2p++ σ σo2o2dp,σ2σpp2+σo2 σo2?. 2.1.5 (13) Summary of Uncertainty-Aware Stereo New stereo observations are obtained on a per-pixel basis, adaptively selecting for each pixel a suitable reference frame and performing a one-dimensional search along the epipolar line. We identified the three major factors which determine the accuracy of such a stereo observation, i.e., • the photometric disparity error σλ2(ξ,π), depending on tphheo magnitude sofp trhiet image gradient along the epipolar line, • the geometric disparity error σλ2(I) ,depending on the athnegl gee bometewtereinc dthisep image gradient and the epipolar line (independent of the gradient magnitude), and • the pixel to inverse depth ratio α, depending on the camera etlra tons ilantvioenrs, eth dee pfothcal r length ,a dndep tehned pixel’s position. These three simple-to-compute and purely local criteria are used to determine for which pixel a stereo update is worth the computational cost. Further, the computed observation variance is then used to integrate the new measurements into the existing depth map. 2.2. Depth Map Propagation We continuously propagate the estimated inverse depth map from frame to frame, once the camera position of the next frame has been estimated. Based on the inverse depth estimate d0 for a pixel, the corresponding 3D point is calculated and projected into the new frame, providing an inverse depth estimate d1 in the new frame. 
The hypothesis is then assigned to the closest integer pixel position to eliminate discretization errors, the sub-pixel accurate image location of the projected point is kept, and re-used for the next propagation step. For propagating the inverse depth variance, we assume the camera rotation to be small. The new inverse depth d1 can then be approximated by – d1(d0) = (d0−1 − tz)−1, (14) where tz is the camera translation along the optical axis. The variance of d1 is hence given by σd21= Jd1σd20JTd1+ σp2=?dd01?4σd20+ σp2, (15) where σp2 is the prediction uncertainty, which directly corresponds to the prediction step in an extended Kalman filter. It can also be interpreted as keeping the variance on 11445533 in the top right shows the new frame I2 (x) without depth information. Middle: Intermediate steps while minimizing E(ξ) on different pyramid levels. The top row shows the back-warped new frame I2 (w(x, d, ξ)), the bottom row shows the respective residual image I2 (w(x, di,ξ)) − I1 (x) . The bottom right image shows the final pixel-weights (black = small weight). Small weights mainly correspond to newly oc,cξl)ud)e −d or disoccluded pixel. tWhe z fo-cuonodrtd hina t uesi onfg a sm poailnlt v failxue ds, fo i.re. σ,p2 sedteticnrgea σsez2s0 d=rift σ,z2 a1s. it causes the estimated geometry to gradually ”lock” into place. Collision handling. At all times, we allow at most one inverse depth hypothesis per pixel: If two inverse depth hypothesis are propagated to the same pixel in the new frame, we distinguish between two cases: 1. if they are statistically similar, i.e., lie within 2σ bounds, they are treated as two independent observations of the pixel’s depth and fused according to (13). 2. otherwise, the point that is further away from the camera is assumed to be occluded, and is removed. 2.3. Depth Map Regularization For each frame – after all observations have been incorporated – we perform one regularization iteration by assign- ing each inverse depth value the average of the surrounding inverse depths, weighted by their respective inverse variance. To preserve sharp edges, if two adjacent inverse depth values are statistically different, i.e., are further away than 2σ, they do not contribute to one another. Note that the respective variances are not changed during regularization to account for the high correlation between neighboring hypotheses. Instead we use the minimal variance of all neighboring pixel when defining the stereo search range, and as a weighting factor for tracking (see Sec. 3). Outlier removal. To handle outliers, we continuously keep track of the validity of each inverse depth hypothesis in terms of the probability that it is an outlier, or has become invalid (e.g., due to occlusion or a moving object). For each successful stereo observation, this probability is decreased. It is increased for each failed stereo search, if the respective intensity changes significantly on propagation, or when the absolute image gradient falls below a given threshold. If, during regularization, the probability that all contributing neighbors are outliers i.e., the product of their individual outlier-probabilities rises above a given threshold, the hypothesis is removed. Equally, if for an “empty” pixel this product drops below a given threshold, a new hypothesis is created from the neighbors. 
This fills holes arising from the forward-warping nature of the propagation step, and dilates the semi-dense depth map to a small neighborhood around sharp image intensity edges, which signifi– – × cantly increases tracking and mapping robustness. 3. Dense Tracking Based on the inverse depth map of the previous frame, we estimate the camera pose of the current frame using dense image alignment. Such methods have previously been applied successfully (in real-time on a CPU) for tracking RGB-D cameras [7], which directly provide dense depth measurements along with the color image. It is based on the direct minimization of the photometric error ri (ξ) := (I2 (w(xi, di , ξ)) − I1 , (16) where the warp function w : Ω1 R R6 → Ω2 maps each point xi ∈ Ω1 in the reference× image RI1 →to Ωthe respective point w(x∈i, Ωdi, ξ) ∈ Ω2 in the new image I2. As input it requires the 3D,ξ pose Ωof the camera ξ ∈ R6 and uses the reestqiumiraetesd t hienv 3erDse p depth fd it ∈e cRa mfore rthae ξ pixel in I1. Note that no depth information with respect t toh Ie2 p i sx required. To increase robustness to self-occlusion and moving objects, we apply a weighting scheme as proposed in [7]. Further, we add the variance of the inverse depth σd2i as an additional weighting term, making the tracking resistant to recently initialized and still inaccurate depth estimates from 11445544 (xi))2 Figure 8. Examples: Top: Camera images overlaid with the respective stimated semi-dense inverse depth map. Bot om: 3D view of tracked scene. Note the versatility of our approach: It accurately reconstructs and tracks through (outside) scenes with a large depth- variance, including far-away objects like clouds , as well as (indoor) scenes with little structure and close to no image corners / keypoints. More examples are shown in the attached video. the mapping process. The final energy that is minimized is hence given by E(ξ) :=?iα(rσid2(iξ))ri(ξ), (17) where α : R → R defines the weight for a given residual. Minimizing t h→is error can b thee interpreted as computing uthale. maximum likelihood estimator for ξ, assuming independent noise on the image intensity values. The resulting weighted least-squares problem is solved efficiently using an iteratively reweighted Gauss-Newton algorithm coupled with a coarse-to-fine approach, using four pyramid levels. Figure 7 shows an example of the tracking process. For further details on the minimization we refer to [1]. 4. System Overview Tracking and depth estimation is split into two separate threads: One continuously propagates the inverse depth map to the most recent tracked frame, updates it with stereocomparisons and partially regularizes it. The other simultaneously tracks each incoming frame on the most recent available depth map. While tracking is performed in real- time at 30Hz, one complete mapping iteration takes longer and is hence done at roughly 15Hz if the map is heavily populated, we adaptively reduce the number of stereo comparisons to maintain a constant frame-rate. For stereo observations, a buffer of up to 100 past frames is kept, automatically removing those that are used least. We use a standard, keypoint-based method to obtain the relative camera pose between two initial frames, which are then used to initialize the inverse depth map needed for tracking successive frames. From this point onward, our method is entirely self-contained. 
In preliminary experiments, we found that in most cases our approach is even able to recover from random or extremely inaccurate initial depth maps, indicating that the keypoint-based initialization might become superfluous in the future. Table 1. Results on RGB-D Benchmark position drift (cm/s) rotation drift (deg/s) ours [7] [8] ours [7] [8] – fr2/xyz fr2/desk 0.6 2.1 0.6 2.0 8.2 - 0.33 0.65 0.34 0.70 3.27 - 5. Results We have tested our approach on both publicly available benchmark sequences, as well as live, using a hand-held camera. Some examples are shown in Fig. 8. Note that our method does not attempt to build a global map, i.e., once a point leaves the field of view of the camera or becomes occluded, the respective depth value is deleted. All experiments are performed on a standard consumer laptop with Intel i7 quad-core CPU. In a preprocessing step, we rectify all images such that a pinhole camera-model can be applied. 5.1. RGB-D Benchmark Sequences As basis for a quantitative evaluation and to facilitate reproducibility and easy comparison with other methods, we use the TUM RGB-D benchmark [16]. For tracking and mapping we only use the gray-scale images; for the very first frame however the provided depth image is used as initialization. Our method (like any monocular visual odometry method) fails in case of pure camera rotation, as the depth of new regions cannot be determined. The achieved tracking accuracy for two feasible sequences that is, sequences which do not contain strong camera rotation without simultaneous translation is given in Table 1. For comparison we also list the accuracy from (1) a state-of-the-art, dense RGB-D odometry [7], and (2) a state-of-the-art, keypointbased monocular SLAM system (PTAM, [8]). We initialize PTAM using the built-in stereo initializer, and perform a 7DoF (rigid body plus scale) alignment to the ground truth trajectory. Figure 9 shows the tracked camera trajectory for fr2/desk. We found that our method achieves similar accu– – 11445555 era the the the trajectory (black), the depth map of the first frame (blue), and estimated depth map (gray-scale) after a complete loop around table. Note how well certain details such as the keyboard and monitor align. racy as [7] which uses the same dense tracking algorithm but relies on the Kinect depth images. The keypoint-based approach [8] proves to be significantly less accurate and robust; it consistently failed after a few seconds for the second sequence. 5.2. Additional Test Sequences To analyze our approach in more detail, we recorded additional challenging sequences with the corresponding ground truth trajectory in a motion capture studio. Figure 10 shows an extract from the video, as well as the tracked and the ground-truth camera position over time. As can be seen from the figure, our approach is able to maintain a reasonably dense depth map at all times and the estimated camera trajectory matches closely the ground truth. 6. Conclusion In this paper we proposed a novel visual odometry method for a monocular camera, which does not require discrete features. In contrast to previous work on dense tracking and mapping, our approach is based on probabilistic depth map estimation and fusion over time. Depth measurements are obtained from patch-free stereo matching in different reference frames at a suitable baseline, which are selected on a per-pixel basis. To our knowledge, this is the first featureless monocular visual odometry method which runs in real-time on a CPU. 
In our experiments, we showed that the tracking performance of our approach is comparable to that of fully dense methods without requiring a depth sensor.
Figure 10. Additional Sequence: Estimated camera trajectory and ground truth (dashed) for a long and challenging sequence. The complete sequence is shown in the attached video.
References [1] S. Baker and I. Matthews. Lucas-Kanade 20 years on: A unifying framework. Technical report, Carnegie Mellon Univ., 2002. 7 [2] A. Clifford. Multivariate Error Analysis. John Wiley & Sons, 1973. 4 [3] A. Comport, E. Malis, and P. Rives. Accurate quadri-focal tracking for robust 3d visual odometry. In ICRA, 2007. 2 [4] A. Davison, I. Reid, N. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 29, 2007. 1 [5] D. Gallup, J. Frahm, P. Mordohai, and M. Pollefeys. Variable baseline/resolution stereo. In CVPR, 2008. 2, 3 [6] C. Harris and M. Stephens. A combined corner and edge detector. In Alvey Vision Conference, 1988. 1 [7] C. Kerl, J. Sturm, and D. Cremers. Robust odometry estimation for RGB-D cameras. In ICRA, 2013. 1, 2, 6, 7, 8 [8] G. Klein and D. Murray. Parallel tracking and mapping for small AR workspaces. In Mixed and Augmented Reality (ISMAR), 2007. 1, 2, 7, 8 [9] G. Klein and D. Murray. Improving the agility of keyframe-based SLAM. In ECCV, 2008. 1 [10] M. Pollefeys et al. Detailed real-time urban 3d reconstruction from video. IJCV, 78(2-3):143–167, 2008. 2, 3 [11] L. Matthies, R. Szeliski, and T. Kanade. Incremental estimation of dense depth maps from image sequences. In CVPR, 1988. 2 [12] R. Newcombe, S. Lovegrove, and A. Davison. DTAM: Dense tracking and mapping in real-time. In ICCV, 2011. 1, 2 [13] M. Okutomi and T. Kanade. A multiple-baseline stereo. Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 15(4):353–363, 1993. 2, 3 [14] T. Sato, M. Kanbara, N. Yokoya, and H. Takemura. Dense 3-d reconstruction of an outdoor scene by hundreds-baseline stereo using a hand-held camera. IJCV, 47:1–3, 2002. 2 [15] J. Stuehmer, S. Gumhold, and D. Cremers. Real-time dense geometry from a handheld camera. In Pattern Recognition (DAGM), 2010. 1, 2 [16] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for the evaluation of RGB-D SLAM systems. In Intelligent Robot Systems (IROS), 2012. 2, 7 [17] A. Wendel, M. Maurer, G. Graber, T. Pock, and H. Bischof. Dense reconstruction on-the-fly. In ECCV, 2012. 1
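The entry above maintains one Gaussian inverse-depth hypothesis per pixel, fuses each new stereo observation by multiplying Gaussians, and carries hypotheses to the next frame with an inflated variance. The sketch below illustrates these two scalar update rules; the small-rotation assumption, the translation t_z along the optical axis, the prediction-noise value, and the function names are simplifying assumptions for illustration, not the authors' implementation.

```python
def fuse_observation(d_prior, var_prior, d_obs, var_obs):
    """Product of two Gaussian inverse-depth estimates (Kalman-style measurement update)."""
    s = var_prior + var_obs
    d_post = (var_obs * d_prior + var_prior * d_obs) / s
    var_post = (var_prior * var_obs) / s
    return d_post, var_post

def propagate(d0, var0, t_z, var_pred=1e-4):
    """Carry an inverse-depth hypothesis over a camera translation t_z along the
    optical axis; the squared Jacobian (d1/d0)^2 scales the variance, and var_pred
    models prediction noise, as in the prediction step of an extended Kalman filter."""
    d1 = 1.0 / (1.0 / d0 - t_z)
    jac = (d1 / d0) ** 2
    return d1, jac ** 2 * var0 + var_pred

# Example: fuse a new stereo measurement into the prior, then carry the hypothesis forward.
d, var = fuse_observation(d_prior=0.50, var_prior=0.02, d_obs=0.55, var_obs=0.04)
d, var = propagate(d, var, t_z=0.05)
```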

3 0.77899265 341 iccv-2013-Real-Time Body Tracking with One Depth Camera and Inertial Sensors

Author: Thomas Helten, Meinard Müller, Hans-Peter Seidel, Christian Theobalt

Abstract: In recent years, the availability of inexpensive depth cameras, such as the Microsoft Kinect, has boosted the research in monocular full body skeletal pose tracking. Unfortunately, existing trackers often fail to capture poses where a single camera provides insufficient data, such as non-frontal poses, and all other poses with body part occlusions. In this paper, we present a novel sensor fusion approach for real-time full body tracking that succeeds in such difficult situations. It takes inspiration from previous tracking solutions, and combines a generative tracker and a discriminative tracker retrieving closest poses in a database. In contrast to previous work, both trackers employ data from a low number of inexpensive body-worn inertial sensors. These sensors provide reliable and complementary information when the monocular depth information alone is not sufficient. We also contribute new algorithmic solutions to best fuse depth and inertial data in both trackers. One is a new visibility model to determine global body pose, occlusions and usable depth correspondences, and to decide which data modality to use for discriminative tracking. We also contribute a new inertial-based pose retrieval and an adapted late fusion step to calculate the final body pose.

4 0.76975095 209 iccv-2013-Image Guided Depth Upsampling Using Anisotropic Total Generalized Variation

Author: David Ferstl, Christian Reinbacher, Rene Ranftl, Matthias Ruether, Horst Bischof

Abstract: In this work we present a novel method for the challenging problem of depth image upsampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high quality depth measurements but are limited in their lateral resolution. To overcome this limitation we formulate a convex optimization problem using higher order regularization for depth image upsampling. In this optimization an anisotropic diffusion tensor, calculated from a high resolution intensity image, is used to guide the upsampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel upsampling clearly outperforms state-of-the-art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth, which, for the first time, enable the benchmarking of depth upsampling methods using real sensor data.
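As an illustration of the guidance step described above, the sketch below builds a per-pixel 2x2 anisotropic diffusion tensor from the gradient of the high-resolution intensity image: diffusion is damped across strong intensity edges and left unchanged along them. The exponential weighting and the constants beta and gamma are assumed, illustrative choices and may differ from the paper's exact formulation.

```python
import numpy as np

def anisotropic_diffusion_tensor(intensity, beta=9.0, gamma=0.85, eps=1e-8):
    """Per-pixel tensor T = w * n n^T + n_perp n_perp^T, with n the unit image
    gradient and w a decreasing function of the gradient magnitude, so that
    smoothing across edges is suppressed while smoothing along edges is kept."""
    gy, gx = np.gradient(intensity.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    nx, ny = gx / (mag + eps), gy / (mag + eps)
    w = np.exp(-beta * mag ** gamma)
    T = np.empty(intensity.shape + (2, 2))
    T[..., 0, 0] = w * nx * nx + ny * ny
    T[..., 0, 1] = T[..., 1, 0] = (w - 1.0) * nx * ny
    T[..., 1, 1] = w * ny * ny + nx * nx
    return T

# The tensor would then weight the first-order gradient term of the TGV
# regularizer inside the primal-dual upsampling solver.
```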

5 0.75727439 108 iccv-2013-Depth from Combining Defocus and Correspondence Using Light-Field Cameras

Author: Michael W. Tao, Sunil Hadap, Jitendra Malik, Ravi Ramamoorthi

Abstract: Light-field cameras have recently become available to the consumer market. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth cues from both defocus and correspondence are available simultaneously in a single capture. Previously, defocus could be achieved only through multiple image exposures focused at different depths, while correspondence cues needed multiple exposures at different viewpoints or multiple cameras; moreover, both cues could not easily be obtained together. In this paper, we present a novel, simple, and principled algorithm that computes dense depth estimation by combining both defocus and correspondence depth cues. We analyze the x-u 2D epipolar image (EPI), where by convention we assume the spatial x coordinate is horizontal and the angular u coordinate is vertical (our final algorithm uses the full 4D EPI). We show that defocus depth cues are obtained by computing the horizontal (spatial) variance after vertical (angular) integration, and correspondence depth cues by computing the vertical (angular) variance. We then show how to combine the two cues into a high quality depth map, suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction.
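A minimal sketch of the two cues described in this abstract: for one candidate refocus slope, the light field is sheared and integrated over the angular dimensions; the spatial contrast of the integrated image serves as the defocus response and the per-pixel angular variance as the correspondence response. The integer-shift shearing, the gradient-based contrast measure, and the lf[u, v, y, x] array layout are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def depth_cues(lf, alpha):
    """Defocus and correspondence responses for one refocus slope alpha on a
    light field lf[u, v, y, x] (angular u, v; spatial y, x)."""
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    sheared = np.zeros((U, V, H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            sheared[u, v] = np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    mean_img = sheared.mean(axis=(0, 1))        # angular integration (refocused image)
    gy, gx = np.gradient(mean_img)
    defocus = np.abs(gx) + np.abs(gy)           # spatial contrast of the refocused image
    correspondence = sheared.var(axis=(0, 1))   # angular variance (low = consistent depth)
    return defocus, correspondence

# Sweeping alpha over a range of slopes and, per pixel, favoring slopes with high
# defocus response and low angular variance yields the two raw depth estimates
# that are subsequently fused into one depth map.
```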

6 0.74655712 133 iccv-2013-Efficient Hand Pose Estimation from a Single Depth Image

7 0.74336433 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera

8 0.73970515 367 iccv-2013-SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels

9 0.70075405 366 iccv-2013-STAR3D: Simultaneous Tracking and Reconstruction of 3D Objects Using RGB-D Data

10 0.68741566 218 iccv-2013-Interactive Markerless Articulated Hand Motion Tracking Using RGB and Depth Data

11 0.68236578 18 iccv-2013-A Joint Intensity and Depth Co-sparse Analysis Model for Depth Map Super-resolution

12 0.66847456 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects

13 0.65204245 228 iccv-2013-Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences

14 0.62956595 424 iccv-2013-Tracking Revisited Using RGBD Camera: Unified Benchmark and Baselines

15 0.57908553 56 iccv-2013-Automatic Registration of RGB-D Scans via Salient Directions

16 0.57904965 402 iccv-2013-Street View Motion-from-Structure-from-Motion

17 0.57529479 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging

18 0.57268143 139 iccv-2013-Elastic Fragments for Dense Scene Reconstruction

19 0.56611246 444 iccv-2013-Viewing Real-World Faces in 3D

20 0.55009192 278 iccv-2013-Multi-scale Topological Features for Hand Posture Representation and Analysis


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.069), (7, 0.018), (12, 0.029), (13, 0.014), (26, 0.076), (31, 0.038), (40, 0.022), (42, 0.116), (64, 0.051), (73, 0.05), (89, 0.196), (95, 0.01), (96, 0.207), (98, 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.90138042 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging

Author: Hae-Gon Jeon, Joon-Young Lee, Yudeog Han, Seon Joo Kim, In So Kweon

Abstract: Finding a good binary sequence is critical in determining the performance of coded exposure imaging, but previous methods mostly rely on a random search for finding the binary codes, which could easily fail to find good long sequences due to the exponentially growing search space. In this paper, we present a new computationally efficient algorithm for generating the binary sequence, which is especially well suited for longer sequences. We show that the concept of the low autocorrelation binary sequence that has been well exploited in the information theory community can be applied for generating the fluttering patterns of the shutter, propose a new measure of a good binary sequence, and present a new algorithm by modifying the Legendre sequence for coded exposure imaging. Experiments using both synthetic and real data show that our new algorithm consistently generates better binary sequences for the coded exposure problem, yielding better deblurring and resolution enhancement results compared to the previous methods for generating the binary codes.
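For reference, the plain Legendre sequence that the modified construction starts from can be generated directly from quadratic residues, and its periodic autocorrelation — the flatness property the abstract alludes to — is easy to inspect. The sketch below shows only this baseline construction; the paper's modification and its proposed sequence-quality measure are not reproduced here.

```python
import numpy as np

def legendre_sequence(p):
    """Binary Legendre sequence of prime length p: entry i is 1 when i is a
    nonzero quadratic residue modulo p, and 0 otherwise (index 0 set to 0)."""
    residues = {(k * k) % p for k in range(1, p)}
    return np.array([1 if i in residues else 0 for i in range(p)], dtype=int)

def periodic_autocorrelation(seq):
    """Periodic autocorrelation of the +/-1 version of a binary sequence."""
    s = 2 * seq - 1
    f = np.fft.fft(s)
    return np.real(np.fft.ifft(f * np.conj(f)))

code = legendre_sequence(31)                  # e.g. a 31-tap shutter flutter pattern
print(code)
print(periodic_autocorrelation(code)[:5])     # near-flat off-peak values
```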

2 0.87862623 22 iccv-2013-A New Adaptive Segmental Matching Measure for Human Activity Recognition

Author: Shahriar Shariat, Vladimir Pavlovic

Abstract: Human activity recognition is a central problem in many real-world applications. In this paper we propose a fast and effective segmental alignment-based method that is able to classify activities and interactions in complex environments. We empirically show that such a model is able to recover the alignment that leads to improved similarity measures within sequence classes and hence raises the classification performance. We also apply a bounding technique on the histogram distances to reduce the computation of the otherwise exhaustive search.

same-paper 3 0.85648912 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones

Author: Petri Tanskanen, Kalin Kolev, Lorenz Meier, Federico Camposeco, Olivier Saurer, Marc Pollefeys

Abstract: unkown-abstract

4 0.83400255 93 iccv-2013-Correlation Adaptive Subspace Segmentation by Trace Lasso

Author: Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan

Abstract: This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix which is close to block diagonal, with nonzero entries corresponding to data point pairs from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that highly correlated data, which are usually from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity for data selection, but it lacks the grouping effect. On the contrary, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect, but they fall short in subset selection. Thus the obtained affinity matrix is usually very sparse by SSC, yet very dense by LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method by using trace Lasso. CASS is a data-correlation-dependent method which simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method which adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.
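The trace Lasso used by CASS can be evaluated as the nuclear norm of the data matrix with its columns rescaled by the coefficient vector, which is what makes it adapt to data correlation. A minimal NumPy sketch is below; it only computes the norm, not the CASS optimization itself.

```python
import numpy as np

def trace_lasso(X, w):
    """Trace Lasso of coefficients w with respect to data matrix X (one data point
    per column): the nuclear norm of X * Diag(w). It behaves like the l1 norm when
    the columns are uncorrelated and like the l2 norm when they are identical."""
    return np.linalg.norm(X @ np.diag(w), ord='nuc')

X = np.random.randn(10, 5)   # five 10-dimensional data points as columns
w = np.random.randn(5)
print(trace_lasso(X, w))
```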

5 0.82327658 187 iccv-2013-Group Norm for Learning Structured SVMs with Unstructured Latent Variables

Author: Daozheng Chen, Dhruv Batra, William T. Freeman

Abstract: Latent variable models have been applied to a number of computer vision problems. However, the complexity of the latent space is typically left as a free design choice. A larger latent space results in a more expressive model, but such models are prone to overfitting and are slower to perform inference with. The goal of this paper is to regularize the complexity of the latent space and learn which hidden states are really relevant for prediction. Specifically, we propose using group-sparsity-inducing regularizers such as ℓ1-ℓ2 to estimate the parameters of Structured SVMs with unstructured latent variables. Our experiments on digit recognition and object detection show that our approach is indeed able to control the complexity of the latent space without any significant loss in accuracy of the learnt model.
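As a small illustration of the ℓ1-ℓ2 regularizer mentioned above, the sketch below evaluates the group norm as a sum of per-block ℓ2 norms, with one block of parameters per hidden state; the grouping scheme and names are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def group_l1_l2(w, groups):
    """l1-l2 group norm: sum of l2 norms of parameter blocks. Penalizing it drives
    whole blocks (and hence whole latent states) to zero."""
    return sum(np.linalg.norm(w[idx]) for idx in groups)

w = np.random.randn(12)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]   # one block per hidden state
print(group_l1_l2(w, groups))
```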

6 0.78619391 438 iccv-2013-Unsupervised Visual Domain Adaptation Using Subspace Alignment

7 0.77633595 338 iccv-2013-Randomized Ensemble Tracking

8 0.77611196 349 iccv-2013-Regionlets for Generic Object Detection

9 0.77438855 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning

10 0.77384561 150 iccv-2013-Exemplar Cut

11 0.7721771 121 iccv-2013-Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach

12 0.77212274 379 iccv-2013-Semantic Segmentation without Annotating Segments

13 0.77037799 196 iccv-2013-Hierarchical Data-Driven Descent for Efficient Optimal Deformation Estimation

14 0.7702862 57 iccv-2013-BOLD Features to Detect Texture-less Objects

15 0.77024508 328 iccv-2013-Probabilistic Elastic Part Model for Unsupervised Face Detector Adaptation

16 0.76986468 188 iccv-2013-Group Sparsity and Geometry Constrained Dictionary Learning for Action Recognition from Depth Maps

17 0.76966918 285 iccv-2013-NEIL: Extracting Visual Knowledge from Web Data

18 0.76856554 61 iccv-2013-Beyond Hard Negative Mining: Efficient Detector Learning via Block-Circulant Decomposition

19 0.76846141 327 iccv-2013-Predicting an Object Location Using a Global Image Representation

20 0.76782048 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning