iccv iccv2013 iccv2013-363 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Olivier Saurer, Kevin Köser, Jean-Yves Bouguet, Marc Pollefeys
Abstract: A huge fraction of cameras used nowadays is based on CMOS sensors with a rolling shutter that exposes the image line by line. For dynamic scenes/cameras this introduces undesired effects like stretch, shear and wobble. It has been shown earlier that rotational shake induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. In contrast, we analyse the case of significant camera motion, e.g. where a bypassing street-level capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, also invalidating rectification for stereo camera systems. On top of that, significant lens distortion, as often present in wide-angle cameras, intertwines with rolling shutter effects, as it changes the time at which a certain 3D point is seen. We show that naive 3D reconstructions (assuming a global shutter) will deliver biased geometry already for very mild assumptions on vehicle speed and resolution. We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion, and perform an evaluation on ground truth laser scan models as well as on real street-level data.
Reference: text
sentIndex sentText sentNum sentScore
1 A huge fraction of cameras used nowadays is based on CMOS sensors with a rolling shutter that exposes the image line by line. [sent-4, score-1.388]
2 It has been shown earlier that rotational shake induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. [sent-6, score-1.56]
3 In contrast, we analyse the case of significant camera motion, e.g. where a bypassing street-level capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. [sent-9, score-1.47]
4 The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, also invalidating rectification for stereo camera systems. [sent-10, score-0.346]
5 On top of that, significant lens distortion, as often present in wide-angle cameras, intertwines with rolling shutter effects, as it changes the time at which a certain 3D point is seen. [sent-11, score-1.648]
6 We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion, and perform an evaluation on ground truth laser scan models as well as on real street-level data. [sent-13, score-1.75]
7 Consequently, the analysis of rolling shutter cameras came into focus. (∗This work was done while the author was employed by ETH Zürich.) [sent-20, score-1.369]
8 (Figure caption fragment:) The front is vertical but, due to fast horizontal camera motion during exposure, appears slanted (red line). [sent-25, score-0.257]
9 With a rolling shutter, exposure of columns (scanlines) happens in sequential order, leading to undesired distortion effects when the camera is not fixed during exposure. [sent-27, score-0.337]
10 It has been shown recently that for hand-held smartphone cameras in static scenes, most of the rolling shutter effects can be compensated in the image (without 3D scene information), that is, by compensating rotation [6, 20, 12, 1]. [sent-28, score-1.449]
11 In our setting, however, the effect is depth dependent (cf. Fig. 1), making a simple 2D image warp into a global shutter image impossible. [sent-31, score-0.775]
12 In this paper we analyze the rolling shutter stereo problem and develop fast multi-view stereo algorithms that produce accurate 3D models from rolling shutter cameras. [sent-35, score-2.788]
13 As real cameras often have lens distortion, in particular the wide-angle cameras often used for capturing street-level data, we also consider lens distortion, which we show makes the problem much more complex. [sent-36, score-0.363]
14 To the best of our knowledge, no previous work exists on dense depth estimation with rolling shutter cameras, and the common case of lens distortion in a rolling shutter setting has not been analyzed. [sent-37, score-2.904]
15 Our contributions are: 1. Practical discussion of fast-motion induced rolling shutter effects: traditional stereo produces biased 3D results for standard street-level capture geometries. [sent-39, score-1.473]
16 2. Analysis of the interplay between rolling shutter and lens distortion: correct undistortion requires 3D scene information. [sent-40, score-1.466]
17 3. Planar rolling shutter warp as a generalization of the plane-induced homography. [sent-42, score-1.464]
18 4. Multi-view stereo algorithm for rolling shutter cameras (with or without lens distortion). In section 2 we review previous work on rolling shutter cameras. [sent-43, score-2.856]
19 We will then recapitulate the rolling shutter model and analyze fast motion and lens distortion effects in section 3. [sent-44, score-1.585]
20 In section 4 we develop a warp for mapping a point of one rolling shutter image into another rolling shutter image, assuming a planar 3D scene. [sent-45, score-2.683]
21 Previous work: The chip-level architecture of a CMOS sensor and the reasons for the rolling shutter effect are described by Liang et al. [sent-50, score-1.299]
22 [17], who also propose an optic-flow-like method to compensate for rolling shutter effects under in-plane motion. [sent-51, score-1.363]
23 Prior work had analyzed the effect of a rolling shutter camera, in particular for special camera motions and geometries (e.g. [sent-53, score-1.418]
24 fronto-parallel, no forward components) and had suggested a scheme for how to calibrate the shutter timings. [sent-55, score-0.687]
25 They showed that in a very special setting a rolling shutter sensor behaves as an x-slits camera [22]. [sent-56, score-1.372]
26 Rolling shutter cameras are also related to the pushbroom camera models [11] often used for satellite images (actually a special case [22] of the x-slits cameras); for those, however, under straight motion, backprojected planes are parallel, while for rolling shutter cameras this does not hold. [sent-59, score-2.313]
27 Recently, several approaches for image stabilization for rolling shutter cameras have been proposed. [sent-60, score-1.383]
28 [3] use stroboscope lighting and subframe warping to synchronize multiple rolling shutter cameras and to compensate for the sequential exposure effects. [sent-62, score-1.488]
29 [10] exploit local flow vectors to compensate rolling shutter for uncalibrated cameras, but using a mixture of homographies. [sent-66, score-1.316]
30 Several authors applied structure from motion algorithms to tackle the problem for static scenes: first, Forssen and Ringaby [6, 20] tracked features through cell phone video sequences and compensated cell phone rotation, which they identified as the dominant source of distortion for hand-held videos. [sent-71, score-0.335]
31 [16] proposed a structure from motion pipeline for cameras mounted on a car, which uses a relative pose prior along the vehicle path. [sent-75, score-0.176]
32 Our approach can be seen as the next step of a 3D reconstruction pipeline from rolling shutter cameras. [sent-76, score-1.327]
33 Given camera motion and orientation (from bundle adjustment and/or sensors), our goal is to densely estimate the 3D scene geometry from rolling shutter cameras. [sent-77, score-1.432]
34 (e.g. on a capture vehicle, or in a cell phone close to an object), introducing a depth-dependent rolling shutter effect. [sent-80, score-1.377]
35 Building on plane sweep stereo (cf. [21] or [15] for non-pinhole cameras), we show how to solve depth estimation, lens undistortion and rolling shutter compensation at the same time. [sent-84, score-1.528]
36 Rolling Shutter Camera Model: Let's look at the case where a 3D point X is observed by a global shutter pinhole camera P, i.e. x ∼ P X. [sent-89, score-0.797]
37 In the case of linear camera motion and constant orientation, the point moves on a straight line in the image, its position depending on the time τ:

x_τ ∼ P_τ X = (R_0 | t_0 + τ t) X    (1)

[sent-92, score-0.169]
38 Now we move to the rolling shutter camera model, assuming that image column (scanline) r is exposed at time τ = m r + b. [sent-93, score-1.398]
39 The projection of the point describes a straight line or a more complicated curve (depending on the degree of the distortion parameters). [sent-96, score-0.186]
40 The distorted image cannot be undistorted without depth information for the 3D point, as the time of exposure τ depends on the depth. [sent-97, score-0.247]
41 Then we look at the scanline (horizontal coordinate) that must match the time of exposure τ:

scanline_{X,P}(τ) = (c_1 τ + c_2)/(c_5 τ + c_6) = τ    (3)

Essentially, we are looking for the fixpoint of scanline(·). [sent-104, score-0.176]
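For linear motion, this fixpoint reduces to a quadratic in τ. The Python sketch below (a minimal model under stated assumptions: pinhole camera with the calibration matrix K made explicit, no lens distortion, linear motion as in Eq. (1), and the scanline-to-time mapping τ = m r + b; all function and variable names are hypothetical) projects a 3D point into a rolling shutter image by solving this fixpoint:

```python
import numpy as np

def rs_project(X, K, R0, t0, t_vel, m, b):
    """Project 3D point X into a rolling shutter image under linear
    camera motion, solving the fixpoint scanline(tau) = tau of Eq. (3)."""
    # Homogeneous image coordinates are linear in tau:
    # x(tau) = K (R0 @ X + t0) + tau * (K @ t_vel) = a + tau * d
    a = K @ (R0 @ X + t0)
    d = K @ t_vel
    # Column u(tau) = (a0 + tau*d0)/(a2 + tau*d2) is exposed at
    # tau = m*u + b; substituting yields the quadratic
    # d2*tau^2 + (a2 - m*d0 - b*d2)*tau - (m*a0 + b*a2) = 0.
    A = d[2]
    B = a[2] - m * d[0] - b * d[2]
    C = -(m * a[0] + b * a[2])
    if abs(A) < 1e-12:                       # degenerate: equation is linear
        taus = [] if abs(B) < 1e-12 else [-C / B]
    else:
        disc = B * B - 4 * A * C
        if disc < 0:
            return []                        # the point crosses no scanline at its own time
        sq = np.sqrt(disc)
        taus = [(-B + sq) / (2 * A), (-B - sq) / (2 * A)]
    # Each valid tau yields one observation; a moving rolling shutter
    # camera may observe the same 3D point more than once.
    out = []
    for tau in taus:
        x = a + tau * d
        out.append((x[:2] / x[2], tau))      # pixel position and exposure time
    return out
```

As a sanity check on the sketch, setting t_vel to zero makes the quadratic degenerate and recovers the global shutter projection, with τ equal to the scanline time of the projected column.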
42 Many lenses, in particular wide-angle lenses, show a significant amount of distortion, and in the following we briefly re-derive a standard radial/tangential distortion model that dates back to Brown [4] (Eq. 4): a radial polynomial in r² plus tangential terms. [sent-110, score-0.202]
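For reference, a minimal implementation of this standard Brown radial/tangential model (the coefficient names k1, k2, k3, p1, p2 follow the common convention and are not necessarily the paper's notation):

```python
def brown_distort(x, y, k1, k2, p1, p2, k3=0.0):
    """Apply Brown radial/tangential distortion to normalized image
    coordinates (x, y), i.e. coordinates on the z = 1 plane."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

Because this mapping changes the column a point falls on, it also changes the point's exposure time; plugging it into the fixpoint of Eq. (3) raises the polynomial degree, as discussed next.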
43 (Table caption fragment:) polynomial degree when (not) considering distortion for obtaining the τ in a rolling shutter camera (see also [9]). [sent-121, score-1.493]
44 With distortion, the curve described by a point in the image even under straight camera motion becomes more complicated, and the degree of Eq. (3) grows accordingly. [sent-125, score-0.189]
45 Note that when the lens distortion of such a rolling shutter image is compensated (classical global-shutter-like inversion of Eq. [sent-129, score-2.233]
46 4), it means that straight lines in space will also become straight in the image, but that the shape of an original CMOS sensor scanline (those pixels that were exposed jointly at the same time) will become a more complicated curve rather than an image column. [sent-130, score-0.182]
47 For short-term motions of rolling shutter cameras (during the exposure time of one image, which is usually a fraction of a second), the linear motion with no rotation (car driving straight) and the orbital motion (cell phone filming a handheld object) are the most important cases. [sent-133, score-1.667]
48 Rolling Shutter Observability: Rolling shutter needs to be considered only when its effects are significant. [sent-138, score-0.734]
49 In the following we concretize the assessment of [9] with practical numbers and considerations, to allow deciding whether a rolling shutter model makes sense for a particular capture configuration. [sent-141, score-1.299]
50 Reconstruction of a square building facade observed by a rolling shutter camera moving parallel to the facade from left to right, with different simulated rolling shutter directions. [sent-143, score-2.709]
51 (a) shows rolling shutter direction relative to motion direction, resulting in some image (b) and later in another image (c). [sent-144, score-1.361]
52 Column (d) shows the reconstruction when assuming the same pose for all pixels (classical global shutter model), where one can see horizontal stretch, horizontal compression or slant in the 3D model. [sent-145, score-0.765]
53 Finally, column (e) shows the reconstruction using our formulation, taking into account the rolling shutter effect, which matches the ground truth 3D model. [sent-146, score-1.327]
54 The focal length can be computed as f = w / (2 tan(θ/2)) for image width w and horizontal field of view θ. Looking at some point on the optical axis ((0, 0, z)ᵀ in camera coordinates), it will be projected to the principal point (in a global shutter camera model). [sent-160, score-0.887]
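A back-of-envelope computation makes this concrete. The sketch below estimates the apparent image displacement of a point at depth Z while the camera translates laterally during one exposure; the speed and exposure time match the paper's street-level setup, while the resolution, field of view and depth are illustrative assumptions:

```python
import math

def rs_pixel_shift(width_px, hfov_deg, speed_mps, exposure_s, depth_m):
    """Apparent lateral displacement (in pixels) of a scene point at the
    given depth during one frame's exposure: f * v * T / Z."""
    f = (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return f * speed_mps * exposure_s / depth_m

# A 1600 px wide, 90 degree FOV camera at 65 km/h with 1/14 s exposure,
# looking at a facade 10 m away: roughly 100 px of shear, far from negligible.
print(rs_pixel_shift(1600, 90.0, 65 / 3.6, 1 / 14, 10.0))
```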
55 Besides those, the direction of the rolling shutter plays an important role. [sent-171, score-1.316]
56 For camera planes parallel to the facades and a rolling shutter orthogonal to the motion direction, a shearing effect will be visible in each image (maybe less visually pleasant when displayed as a raw image), and such images do not align well with Manhattan structures in the scene. [sent-172, score-1.452]
57 On the other hand, when the rolling shutter is parallel to the motion, the image will be shrunk or stretched in that direction, and for certain driving speeds undersampling issues may appear. [sent-173, score-1.404]
58 The resulting images and qualitative effects when observing a plane with different shutter directions can be seen in the facade figure (Fig. 3). [sent-174, score-0.801]
59 Rolling Shutter Warp Across a Plane: To warp a point from one rolling shutter camera to another, we first backproject it to a plane Π and then project it into the other image. [sent-178, score-1.524]
60 RS Backprojection of pixel onto space plane Π: Given some pixel position p ∈ P2 in a rolling shutter image, from its scanline we know immediately the time of exposure τp and consequently the corresponding projection matrix Pτp . [sent-179, score-1.595]
61 RS Projection of plane point into other view: The time of exposure of a certain 3D point can be computed according to Eq. (3). [sent-184, score-0.203]
62 The degree of the polynomial depends on the lens distortion and the motion model, as can be seen in the polynomial-degree table above. [sent-189, score-0.284]
63 When using only the first radial distortion coefficient and one of the important motion models, this can be solved in closed form for the time of exposure τq in the other image. [sent-193, score-0.217]
64 In the rare case that more than one solution fulfills this, the same 3D point is seen multiple times in the same image (remember the rolling shutter creates a multi-perspective image when the camera moves). [sent-195, score-1.389]
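Putting the two steps together, a minimal sketch of the warp across a plane n·X = d (again assuming linear motion and no lens distortion, and reusing the hypothetical rs_project helper sketched above):

```python
import numpy as np

def rs_warp_across_plane(p, cam1, cam2, n, d, m, b):
    """Warp pixel p from RS image 1 into RS image 2 across the plane
    n @ X = d; cam = (K, R0, t0, t_vel) with convention x = K (R X + t)."""
    K1, R1, t1, v1 = cam1
    K2, R2, t2, v2 = cam2
    # 1. The scanline (column) of p directly gives its exposure time.
    tau_p = m * p[0] + b
    # 2. Backproject the ray of p at pose P_{tau_p} and intersect it
    #    with the plane: X = c + s * r, subject to n @ X = d.
    r = R1.T @ np.linalg.inv(K1) @ np.array([p[0], p[1], 1.0])
    c = -R1.T @ (t1 + tau_p * v1)           # camera center at time tau_p
    s = (d - n @ c) / (n @ r)
    X = c + s * r
    # 3. Project X into view 2 by solving its scanline fixpoint there,
    #    which yields the pixel and exposure time tau_q (0, 1 or 2 solutions).
    return rs_project(X, K2, R2, t2, v2, m, b)
```

For zero velocities this collapses to the classical plane-induced homography between two global shutter views, which is exactly the generalization the paper describes.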
65 We suggest, and later evaluate, two approximate strategies that promise a speedup at the cost of some accuracy. Fast approximation 1 (FA1), global shutter lens undistortion: an efficient approximation is to perform the expensive lens undistortion globally, and then solve the quadratic Eq. [sent-203, score-1.056]
66 This is particularly useful when the lens distortion is minor, because then the time of exposure does not change much with or without distortion. [sent-205, score-0.296]
67 In this case the undistortion can be precomputed offline using a lookup table (as is standard for global shutter undistortion) and has to be done only once per image (if warps across multiple planes are run, as in plane sweeping, there is no need to run it per plane). [sent-206, score-0.93]
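A sketch of this FA1 preprocessing using OpenCV's standard global shutter undistortion maps (the use of OpenCV and the wrapper name here are an implementation assumption, not something the paper prescribes):

```python
import cv2

def precompute_undistortion(K, dist_coeffs, width, height):
    """FA1: build the per-camera undistortion lookup table once; the
    per-plane rolling shutter warps then only need the cheap quadratic
    solve for the exposure time."""
    map1, map2 = cv2.initUndistortRectifyMap(
        K, dist_coeffs, None, K, (width, height), cv2.CV_32FC1)
    return map1, map2

# Applied once per image:
#   undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)
# The exposure time of a pixel is then approximated from its scanline in
# the original (distorted) image, which is accurate for minor distortion.
```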
68 Integration into Plane Sweep Stereo: For global shutter cameras, it has been shown that the ability of the graphics processing unit to handle smooth warps [21] can be exploited for real-time stereo approaches. [sent-216, score-0.824]
69 Evaluation: We evaluated the proposed rolling shutter stereo algorithm on both synthetic and real datasets that mimic a single rolling shutter camera mounted on a moving car (allowing motion stereo to be computed from consecutive images). [sent-227, score-2.951]
70 Implementation Details: For the evaluations in this paper we stick to a simple plane sweep model with a single plane normal (e.g. fronto-parallel). [sent-229, score-0.188]
71 The planes are sampled approximately linearly in image space, such that a warp over two neighbouring planes results in a pixel displacement of at most one pixel. [sent-233, score-0.188]
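For fronto-parallel planes, sampling "linearly in image space" amounts to sampling linearly in disparity (inverse depth); a minimal sketch under that assumption, with hypothetical parameter names:

```python
import numpy as np

def sample_sweep_planes(d_min, d_max, f, baseline, max_px_step=1.0):
    """Sample fronto-parallel plane depths so that neighbouring planes
    induce at most ~max_px_step pixels of displacement, i.e. uniformly
    in disparity f*B/Z rather than in depth."""
    disp_hi = f * baseline / d_min
    disp_lo = f * baseline / d_max
    n = int(np.ceil((disp_hi - disp_lo) / max_px_step)) + 1
    disparities = np.linspace(disp_hi, disp_lo, n)
    return f * baseline / disparities       # depths, densest near d_min
```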
72 A warp over a plane is computed by first undistorting a pixel and intersecting the ray passing through the pixel with the plane being considered. [sent-234, score-0.252]
73 Finally, for each pixel a geometric verification step is performed once the depth map for the next reference view has been computed: each pixel of depth map one is backprojected into space, projected into the other image, and compared to the depth estimated at that position. [sent-239, score-0.221]
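A simplified version of this cross-check is sketched below. For brevity it uses a single (global shutter) pose per view instead of the per-scanline rolling shutter poses, and the threshold value is an illustrative assumption:

```python
import numpy as np

def depth_consistency_mask(depth1, depth2, cam1, cam2, thresh=0.1):
    """Backproject every pixel of depth map 1, project it into view 2
    and compare against the depth estimated there; pixels whose depths
    disagree by more than `thresh` are invalidated.
    cam = (K, R, t) with convention x = K (R X + t)."""
    K1, R1, t1 = cam1
    K2, R2, t2 = cam2
    h, w = depth1.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    rays = np.linalg.inv(K1) @ pix
    X = R1.T @ (rays * depth1.ravel() - t1[:, None])   # points in world frame
    x2 = K2 @ (R2 @ X + t2[:, None])                   # into view 2
    z2 = x2[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u2 = np.round(x2[0] / z2).astype(int)
        v2 = np.round(x2[1] / z2).astype(int)
    ok = (z2 > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    d2 = np.full(h * w, np.inf)
    d2[ok] = depth2[v2[ok], u2[ok]]
    mask = np.zeros(h * w, dtype=bool)
    mask[ok] = np.abs(d2[ok] - z2[ok]) < thresh
    return mask.reshape(h, w)
```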
74 Ground Truth Evaluation, Rolling Shutter Direction: First, we qualitatively analyze the effects when ignoring rolling shutter in stereo algorithms. [sent-244, score-1.394]
75 In Fig. 3 we texture a square plane in 3D space and synthetically generate two rolling shutter images. [sent-247, score-1.38]
76 Depth maps for the global shutter model are dense; this means that obtaining visually plausible results when applying a global shutter model to rolling shutter data does not mean the data is actually correct. [sent-252, score-2.713]
77 From each of those we pick one column and compose a novel image out of these scanlines to simulate the rolling shutter effect (afterwards, the GPU’s z-buffer is handled in the same way to obtain a rolling shutter depth map). [sent-256, score-2.675]
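This simulation is straightforward to reproduce; a minimal sketch assuming one global shutter render per scanline along the camera path (frame r supplies column r):

```python
import numpy as np

def simulate_rolling_shutter(frames):
    """Compose a rolling shutter image from a sequence of global
    shutter renders sampled along the camera path: column r is copied
    from frame r (apply the same to z-buffers for RS depth maps)."""
    h, w = frames[0].shape[:2]
    assert len(frames) >= w, "need one render per scanline"
    rs = np.empty_like(frames[0])
    for r in range(w):
        rs[:, r] = frames[r][:, r]
    return rs
```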
78 We evaluated rolling shutter stereo against global shutter stereo while the camera undergoes a motion of 0m, 0. [sent-259, score-2.314]
79 This corresponds to maximum driving speeds of 65 km/h (castle) and 24 km/h (old town) for an exposure time of approximately 1/14 s (as in our real system). [sent-269, score-0.175]
80 In Fig. 4 it can be seen that the global shutter algorithm performs worse with increasing rolling shutter effect, while the rolling shutter algorithm is approximately constant. [sent-272, score-3.305]
81 The 1 m depth consistency check confirms again that the errors when using the global shutter model are significant but hard to detect. [sent-275, score-0.749]
82 3D error visualization for rolling shutter image pairs of two ground truth scenes (top and bottom): For no motion during exposure (Δx = 0, that is global shutter) all algorithms produce the same results. [sent-279, score-1.466]
83 For the global shutter, the error is generally higher and produces a systematic offset depending on depth and also distance to the distortion center. [sent-282, score-0.19]
84 Left: Median 3D error (boxes indicate median absolute deviation) when using a global or a rolling shutter algorithm. [sent-424, score-1.319]
85 For Fig. 6 we construct the rolling shutter images in such a way that the motion during exposure of image 1 is in a different local direction than that of image 2. [sent-427, score-1.432]
86 This happens when using different rolling shutter cameras on a car which are looking in different directions. (Table fragment: per-method warp time [ms], median error [m], MAD [m] and fill rate for the FA and RS variants.) [sent-428, score-1.474]
87 We apply GS and RS stereo on both camera streams independently and fuse the resulting models into a single coordinate frame. [sent-446, score-0.168]
88 In this case there is no longer just a systematic bias in the data; global shutter stereo simply cannot find the correct correspondence any more. [sent-450, score-0.829]
89 The exposure time of the camera was 72ms (approximately 1/14 s). [sent-454, score-0.175]
90 This corresponds to 0.5 m displacement during exposure and a baseline of 2 m between frames; the driving speed was 25 km/h. [sent-456, score-0.184]
91 We compare GS and RS stereo on the Oake street sequence and observe that the facade is reconstructed further away from the camera in the GS case, compared to the RS case. [sent-461, score-0.207]
92 Conclusion: We have analyzed the setting of camera-motion-induced rolling shutter effects and have shown that already for very moderate speeds and resolutions the effects are significant. [sent-465, score-1.568]
93 In particular, although global shutter algorithms seem to work out well (resulting in a dense, smooth depth map), the results are actually not correct. [sent-466, score-0.749]
94 We then generalized the homography transfer across a plane, known from global shutter cameras, to the rolling shutter setting, also considering lens distortion, which intertwines with the rolling shutter. [sent-467, score-2.299]
95 Based on this building block, a plane sweep approach has been implemented. (Figure 6 panels: RS image, GS depth, FA1 depth, FA2 depth, RS depth.) [sent-468, score-0.163]
96 Global shutter (GS) stereo fails as the correct correspondence is not in the search range. [sent-470, score-0.782]
97 Rolling shutter (RS) stereo provides consistent depth throughout. [sent-473, score-0.782]
98 This allows one to decide between speed and precision in case the presented rule of thumb indicates that a rolling shutter model should be used for the setting at hand. [sent-478, score-1.335]
99 Digital video stabilization and rolling shutter correction using gyroscopes. [sent-485, score-1.313]
100 Synchronization and rolling shutter compensation for consumer video camera arrays. [sent-500, score-1.392]
wordName wordTfidf (topN-words)
[('shutter', 0.687), ('rolling', 0.612), ('exposure', 0.102), ('distortion', 0.101), ('stereo', 0.095), ('lens', 0.093), ('undistortion', 0.074), ('scanline', 0.074), ('camera', 0.073), ('cameras', 0.07), ('warp', 0.068), ('plane', 0.067), ('gs', 0.066), ('sweep', 0.054), ('driving', 0.048), ('effects', 0.047), ('motion', 0.045), ('phone', 0.044), ('town', 0.043), ('rs', 0.042), ('depth', 0.042), ('castle', 0.039), ('ringaby', 0.037), ('streetlevel', 0.037), ('planes', 0.035), ('scanlines', 0.035), ('straight', 0.034), ('cell', 0.034), ('compensated', 0.033), ('vehicle', 0.033), ('forss', 0.033), ('cmos', 0.031), ('rectification', 0.03), ('mounted', 0.028), ('reconstruction', 0.028), ('systematic', 0.027), ('feldman', 0.026), ('hedborg', 0.026), ('backprojected', 0.026), ('exposed', 0.026), ('speeds', 0.025), ('polynomial', 0.025), ('sweeping', 0.025), ('old', 0.023), ('warps', 0.022), ('phones', 0.022), ('slanted', 0.022), ('hanning', 0.021), ('intertwines', 0.021), ('karpenko', 0.021), ('compensation', 0.02), ('speed', 0.02), ('street', 0.02), ('global', 0.02), ('degree', 0.02), ('facade', 0.019), ('nowadays', 0.019), ('fillmore', 0.019), ('pushbroom', 0.019), ('pixel', 0.018), ('view', 0.018), ('analyzed', 0.018), ('laser', 0.018), ('forssen', 0.017), ('geomar', 0.017), ('klingner', 0.017), ('resolution', 0.017), ('grid', 0.017), ('direction', 0.017), ('compensate', 0.017), ('point', 0.017), ('car', 0.017), ('consequently', 0.017), ('undistorted', 0.016), ('thumb', 0.016), ('rotational', 0.016), ('homography', 0.016), ('mordohai', 0.016), ('speedup', 0.015), ('bundle', 0.015), ('lenses', 0.015), ('stretched', 0.015), ('reference', 0.015), ('horizontal', 0.015), ('displacement', 0.014), ('distorted', 0.014), ('undesired', 0.014), ('gallup', 0.014), ('stabilization', 0.014), ('stretch', 0.014), ('induced', 0.014), ('curve', 0.014), ('texture', 0.014), ('ray', 0.014), ('gpu', 0.014), ('motions', 0.014), ('geometries', 0.014), ('biased', 0.014), ('radial', 0.014), ('epipolar', 0.014)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 363 iccv-2013-Rolling Shutter Stereo
Author: Olivier Saurer, Kevin Köser, Jean-Yves Bouguet, Marc Pollefeys
Abstract: A huge fraction of cameras used nowadays is based on CMOS sensors with a rolling shutter that exposes the image line by line. For dynamic scenes/cameras this introduces undesired effects like stretch, shear and wobble. It has been shown earlier that rotational shake induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. In contrast, we analyse the case of significant camera motion, e.g. where a bypassing street-level capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, also invalidating rectification for stereo camera systems. On top of that, significant lens distortion, as often present in wide-angle cameras, intertwines with rolling shutter effects, as it changes the time at which a certain 3D point is seen. We show that naive 3D reconstructions (assuming a global shutter) will deliver biased geometry already for very mild assumptions on vehicle speed and resolution. We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion, and perform an evaluation on ground truth laser scan models as well as on real street-level data.
2 0.7003215 32 iccv-2013-A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration
Author: Maxime Meilland, Tom Drummond, Andrew I. Comport
Abstract: Motion blur and rolling shutter deformations both inhibit visual motion registration, whether it be due to a moving sensor or a moving target. Whilst both deformations exist simultaneously, no models have been proposed to handle them together. Furthermore, neither deformation has been considered previously in the context of monocular full-image 6 degrees of freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines. Blur is a function of the pixel exposure time and the motion vector. In this paper a complete dense 3D registration model will be derived to account for both motion blur and rolling shutter deformations simultaneously. Various approaches will be compared with respect to ground truth and live real-time performance will be demonstrated for complex scenarios where both blur and shutter deformations are dominant.
3 0.44547546 402 iccv-2013-Street View Motion-from-Structure-from-Motion
Author: Bryan Klingner, David Martin, James Roseborough
Abstract: We describe a structure-from-motion framework that handles “generalized” cameras, such as moving rollingshutter cameras, and works at an unprecedented scale— billions of images covering millions of linear kilometers of roads—by exploiting a good relative pose prior along vehicle paths. We exhibit a planet-scale, appearanceaugmented point cloud constructed with our framework and demonstrate its practical use in correcting the pose of a street-level image collection.
4 0.11574343 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging
Author: Hae-Gon Jeon, Joon-Young Lee, Yudeog Han, Seon Joo Kim, In So Kweon
Abstract: Finding a good binary sequence is critical in determining the performance of the coded exposure imaging, but previous methods mostly rely on a random search for finding the binary codes, which could easily fail to find good long sequences due to the exponentially growing search space. In this paper, we present a new computationally efficient algorithm for generating the binary sequence, which is especially well suited for longer sequences. We show that the concept of the low autocorrelation binary sequence that has been well exploited in the information theory community can be applied for generating the fluttering patterns of the shutter, propose a new measure of a good binary sequence, and present a new algorithm by modifying the Legendre sequence for the coded exposure imaging. Experiments using both synthetic and real data show that our new algorithm consistently generates better binary sequences for the coded exposure problem, yielding better deblurring and resolution enhancement results compared to the previous methods for generating the binary codes.
5 0.097668134 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video
Author: Feng Liu, Yuzhen Niu, Hailin Jin
Abstract: Shaky stereoscopic video is not only unpleasant to watch but may also cause 3D fatigue. Stabilizing the left and right view of a stereoscopic video separately using a monocular stabilization method tends to both introduce undesirable vertical disparities and damage horizontal disparities, which may destroy the stereoscopic viewing experience. In this paper, we present a joint subspace stabilization method for stereoscopic video. We prove that the low-rank subspace constraint for monocular video [10] also holds for stereoscopic video. Particularly, the feature trajectories from the left and right video share the same subspace. Based on this proof, we develop a stereo subspace stabilization method that jointly computes a common subspace from the left and right video and uses it to stabilize the two videos simultaneously. Our method meets the stereoscopic constraints without 3D reconstruction or explicit left-right correspondence. We test our method on a variety of stereoscopic videos with different scene content and camera motion. The experiments show that our method achieves high-quality stabilization for stereoscopic video in a robust and efficient way.
6 0.076372176 317 iccv-2013-Piecewise Rigid Scene Flow
7 0.074857093 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera
8 0.072194777 280 iccv-2013-Multi-view 3D Reconstruction from Uncalibrated Radially-Symmetric Cameras
9 0.071833901 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging
10 0.071199164 228 iccv-2013-Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences
11 0.065828986 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction
12 0.063184343 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera
13 0.060790129 342 iccv-2013-Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length
14 0.060040735 436 iccv-2013-Unsupervised Intrinsic Calibration from a Single Frame Using a "Plumb-Line" Approach
15 0.05673838 255 iccv-2013-Local Signal Equalization for Correspondence Matching
16 0.056648955 304 iccv-2013-PM-Huber: PatchMatch with Huber Regularization for Stereo Matching
17 0.055877458 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces
18 0.055815067 174 iccv-2013-Forward Motion Deblurring
19 0.05446282 108 iccv-2013-Depth from Combining Defocus and Correspondence Using Light-Field Cameras
20 0.05407821 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones
topicId topicWeight
[(0, 0.102), (1, -0.163), (2, -0.032), (3, 0.064), (4, -0.023), (5, 0.019), (6, 0.032), (7, -0.14), (8, 0.043), (9, 0.059), (10, 0.025), (11, -0.081), (12, -0.008), (13, -0.101), (14, -0.063), (15, 0.062), (16, 0.073), (17, 0.051), (18, -0.03), (19, 0.049), (20, -0.054), (21, -0.203), (22, -0.175), (23, 0.171), (24, -0.082), (25, -0.244), (26, -0.245), (27, -0.079), (28, 0.259), (29, 0.212), (30, -0.118), (31, -0.279), (32, 0.123), (33, 0.272), (34, -0.013), (35, 0.139), (36, -0.005), (37, -0.059), (38, -0.132), (39, -0.179), (40, 0.048), (41, -0.023), (42, 0.029), (43, -0.048), (44, 0.046), (45, 0.131), (46, -0.082), (47, -0.079), (48, -0.038), (49, -0.041)]
simIndex simValue paperId paperTitle
same-paper 1 0.9536345 363 iccv-2013-Rolling Shutter Stereo
Author: Olivier Saurer, Kevin Köser, Jean-Yves Bouguet, Marc Pollefeys
Abstract: A huge fraction of cameras used nowadays is based on CMOS sensors with a rolling shutter that exposes the image line by line. For dynamic scenes/cameras this introduces undesired effects like stretch, shear and wobble. It has been shown earlier that rotational shake induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. In contrast, we analyse the case of significant camera motion, e.g. where a bypassing street-level capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, also invalidating rectification for stereo camera systems. On top of that, significant lens distortion, as often present in wide-angle cameras, intertwines with rolling shutter effects, as it changes the time at which a certain 3D point is seen. We show that naive 3D reconstructions (assuming a global shutter) will deliver biased geometry already for very mild assumptions on vehicle speed and resolution. We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion, and perform an evaluation on ground truth laser scan models as well as on real street-level data.
2 0.85107839 32 iccv-2013-A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration
Author: Maxime Meilland, Tom Drummond, Andrew I. Comport
Abstract: Motion blur and rolling shutter deformations both inhibit visual motion registration, whether it be due to a moving sensor or a moving target. Whilst both deformations exist simultaneously, no models have been proposed to handle them together. Furthermore, neither deformation has been considered previously in the context of monocular full-image 6 degrees of freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines. Blur is a function of the pixel exposure time and the motion vector. In this paper a complete dense 3D registration model will be derived to account for both motion blur and rolling shutter deformations simultaneously. Various approaches will be compared with respect to ground truth and live real-time performance will be demonstrated for complex scenarios where both blur and shutter deformations are dominant.
3 0.70026928 402 iccv-2013-Street View Motion-from-Structure-from-Motion
Author: Bryan Klingner, David Martin, James Roseborough
Abstract: We describe a structure-from-motion framework that handles “generalized” cameras, such as moving rollingshutter cameras, and works at an unprecedented scale— billions of images covering millions of linear kilometers of roads—by exploiting a good relative pose prior along vehicle paths. We exhibit a planet-scale, appearanceaugmented point cloud constructed with our framework and demonstrate its practical use in correcting the pose of a street-level image collection.
4 0.46271789 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging
Author: Hae-Gon Jeon, Joon-Young Lee, Yudeog Han, Seon Joo Kim, In So Kweon
Abstract: Finding a good binary sequence is critical in determining the performance of the coded exposure imaging, but previous methods mostly rely on a random search for finding the binary codes, which could easily fail to find good long sequences due to the exponentially growing search space. In this paper, we present a new computationally efficient algorithm for generating the binary sequence, which is especially well suited for longer sequences. We show that the concept of the low autocorrelation binary sequence that has been well exploited in the information theory community can be applied for generating the fluttering patterns of the shutter, propose a new measure of a good binary sequence, and present a new algorithm by modifying the Legendre sequence for the coded exposure imaging. Experiments using both synthetic and real data show that our new algorithm consistently generates better binary sequences for the coded exposure problem, yielding better deblurring and resolution enhancement results compared to the previous methods for generating the binary codes.
5 0.39646566 164 iccv-2013-Fibonacci Exposure Bracketing for High Dynamic Range Imaging
Author: Mohit Gupta, Daisuke Iso, Shree K. Nayar
Abstract: Exposure bracketing for high dynamic range (HDR) imaging involves capturing several images of the scene at different exposures. If either the camera or the scene moves during capture, the captured images must be registered. Large exposure differences between bracketed images lead to inaccurate registration, resulting in artifacts such as ghosting (multiple copies of scene objects) and blur. We present two techniques, one for image capture (Fibonacci exposure bracketing) and one for image registration (generalized registration), to prevent such motion-related artifacts. Fibonacci bracketing involves capturing a sequence of images such that each exposure time is the sum of the previous N (N > 1) exposures. Generalized registration involves estimating motion between sums of contiguous sets of frames, instead of between individual frames. Together, the two techniques ensure that motion is always estimated between frames of the same total exposure time. This results in HDR images and videos which have both a large dynamic range and minimal motion-related artifacts. We show, by results for several real-world indoor and outdoor scenes, that the proposed approach significantly outperforms several existing bracketing schemes.
6 0.30442894 348 iccv-2013-Refractive Structure-from-Motion on Underwater Images
7 0.282451 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video
8 0.234146 280 iccv-2013-Multi-view 3D Reconstruction from Uncalibrated Radially-Symmetric Cameras
9 0.22897965 254 iccv-2013-Live Metric 3D Reconstruction on Mobile Phones
10 0.22325298 49 iccv-2013-An Enhanced Structure-from-Motion Paradigm Based on the Absolute Dual Quadric and Images of Circular Points
11 0.21977894 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging
12 0.20757414 228 iccv-2013-Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences
13 0.20115428 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction
14 0.20089561 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera
15 0.19592094 139 iccv-2013-Elastic Fragments for Dense Scene Reconstruction
16 0.19162095 436 iccv-2013-Unsupervised Intrinsic Calibration from a Single Frame Using a "Plumb-Line" Approach
17 0.17739516 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes
18 0.17376858 17 iccv-2013-A Global Linear Method for Camera Pose Registration
19 0.17232348 255 iccv-2013-Local Signal Equalization for Correspondence Matching
20 0.17128688 286 iccv-2013-NYC3DCars: A Dataset of 3D Vehicles in Geographic Context
topicId topicWeight
[(2, 0.044), (7, 0.029), (26, 0.064), (31, 0.033), (38, 0.062), (40, 0.011), (42, 0.076), (55, 0.168), (64, 0.026), (73, 0.033), (89, 0.234), (98, 0.064)]
simIndex simValue paperId paperTitle
same-paper 1 0.90306485 363 iccv-2013-Rolling Shutter Stereo
Author: Olivier Saurer, Kevin Köser, Jean-Yves Bouguet, Marc Pollefeys
Abstract: A huge fraction of cameras used nowadays is based on CMOS sensors with a rolling shutter that exposes the image line by line. For dynamic scenes/cameras this introduces undesired effects like stretch, shear and wobble. It has been shown earlier that rotational shake induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. In contrast, we analyse the case of significant camera motion, e.g. where a bypassing street-level capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, also invalidating rectification for stereo camera systems. On top of that, significant lens distortion, as often present in wide-angle cameras, intertwines with rolling shutter effects, as it changes the time at which a certain 3D point is seen. We show that naive 3D reconstructions (assuming a global shutter) will deliver biased geometry already for very mild assumptions on vehicle speed and resolution. We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion, and perform an evaluation on ground truth laser scan models as well as on real street-level data.
2 0.9004491 32 iccv-2013-A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration
Author: Maxime Meilland, Tom Drummond, Andrew I. Comport
Abstract: Motion blur and rolling shutter deformations both inhibit visual motion registration, whether it be due to a moving sensor or a moving target. Whilst both deformations exist simultaneously, no models have been proposed to handle them together. Furthermore, neither deformation has been considered previously in the context of monocular full-image 6 degrees of freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines. Blur is a function of the pixel exposure time and the motion vector. In this paper a complete dense 3D registration model will be derived to account for both motion blur and rolling shutter deformations simultaneously. Various approaches will be compared with respect to ground truth and live real-time performance will be demonstrated for complex scenarios where both blur and shutter deformations are dominant.
3 0.89642107 183 iccv-2013-Geometric Registration Based on Distortion Estimation
Author: Wei Zeng, Mayank Goswami, Feng Luo, Xianfeng Gu
Abstract: Surface registration plays a fundamental role in many applications in computer vision and aims at finding a one-to-one correspondence between surfaces. Conformal mapping based surface registration methods conformally map 2D/3D surfaces onto 2D canonical domains and perform the matching on the 2D plane. This registration framework reduces dimensionality, and the result is intrinsic to Riemannian metric and invariant under isometric deformation. However, conformal mapping will be affected by inconsistent boundaries and non-isometric deformations of surfaces. In this work, we quantify the effects of boundary variation and non-isometric deformation to conformal mappings, and give the theoretical upper bounds for the distortions of conformal mappings under these two factors. Besides giving the thorough theoretical proofs of the theorems, we verified them by concrete experiments using 3D human facial scans with dynamic expressions and varying boundaries. Furthermore, we used the distortion estimates for reducing search range in feature matching of surface registration applications. The experimental results are consistent with the theoretical predictions and also demonstrate the performance improvements in feature tracking.
4 0.88999474 224 iccv-2013-Joint Optimization for Consistent Multiple Graph Matching
Author: Junchi Yan, Yu Tian, Hongyuan Zha, Xiaokang Yang, Ya Zhang, Stephen M. Chu
Abstract: The problem of graph matching in general is NP-hard and approaches have been proposed for its suboptimal solution, most focusing on finding the one-to-one node mapping between two graphs. A more general and challenging problem arises when one aims to find consistent mappings across a number of graphs more than two. Conventional graph pair matching methods often result in mapping inconsistency since the mapping between two graphs can either be determined by pair mapping or by an additional anchor graph. To address this issue, a novel formulation is derived which is maximized via alternating optimization. Our method enjoys several advantages: 1) the mappings are jointly optimized rather than sequentially performed by applying pair matching, allowing the global affinity information across graphs to be propagated and explored; 2) the number of concerned variables to optimize is linear in the number of graphs, superior to local pair matching, which results in O(n²) variables; 3) the mapping consistency constraints are analytically satisfied during optimization; and 4) off-the-shelf graph pair matching solvers can be reused under the proposed framework in an 'out-of-the-box' fashion. Competitive results on both the synthesized data and the real data are reported, by varying the level of deformation, outliers and edge densities.
5 0.88632911 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video
Author: Feng Liu, Yuzhen Niu, Hailin Jin
Abstract: Shaky stereoscopic video is not only unpleasant to watch but may also cause 3D fatigue. Stabilizing the left and right view of a stereoscopic video separately using a monocular stabilization method tends to both introduce undesirable vertical disparities and damage horizontal disparities, which may destroy the stereoscopic viewing experience. In this paper, we present a joint subspace stabilization method for stereoscopic video. We prove that the low-rank subspace constraint for monocular video [10] also holds for stereoscopic video. Particularly, the feature trajectories from the left and right video share the same subspace. Based on this proof, we develop a stereo subspace stabilization method that jointly computes a common subspace from the left and right video and uses it to stabilize the two videos simultaneously. Our method meets the stereoscopic constraints without 3D reconstruction or explicit left-right correspondence. We test our method on a variety of stereoscopic videos with different scene content and camera motion. The experiments show that our method achieves high-quality stabilization for stereoscopic video in a robust and efficient way.
6 0.86187088 144 iccv-2013-Estimating the 3D Layout of Indoor Scenes and Its Clutter from Depth Sensors
7 0.85706407 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising
8 0.84884113 406 iccv-2013-Style-Aware Mid-level Representation for Discovering Visual Connections in Space and Time
9 0.83833921 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error
10 0.83153552 267 iccv-2013-Model Recommendation with Virtual Probes for Egocentric Hand Detection
11 0.8315112 402 iccv-2013-Street View Motion-from-Structure-from-Motion
12 0.82529759 33 iccv-2013-A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis
13 0.81844765 1 iccv-2013-3DNN: Viewpoint Invariant 3D Geometry Matching for Scene Understanding
14 0.8168366 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction
15 0.8164438 437 iccv-2013-Unsupervised Random Forest Manifold Alignment for Lipreading
16 0.81086439 444 iccv-2013-Viewing Real-World Faces in 3D
17 0.81084573 434 iccv-2013-Unifying Nuclear Norm and Bilinear Factorization Approaches for Low-Rank Matrix Decomposition
18 0.81030709 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling
19 0.80959487 108 iccv-2013-Depth from Combining Defocus and Correspondence Using Light-Field Cameras
20 0.80945301 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces