iccv iccv2013 iccv2013-348 knowledge-graph by maker-knowledge-mining

348 iccv-2013-Refractive Structure-from-Motion on Underwater Images


Source: pdf

Author: Anne Jordt-Sedlazeck, Reinhard Koch

Abstract: In underwater environments, cameras need to be confined in an underwater housing, viewing the scene through a piece of glass. In case of flat port underwater housings, light rays entering the camera housing are refracted twice, due to different medium densities of water, glass, and air. This causes the usually linear rays of light to bend and the commonly used pinhole camera model to be invalid. When using the pinhole camera model without explicitly modeling refraction in Structure-from-Motion (SfM) methods, a systematic model error occurs. Therefore, in this paper, we propose a system for computing camera path and 3D points with explicit incorporation of refraction using new methods for pose estimation. Additionally, a new error function is introduced for non-linear optimization, especially bundle adjustment. The proposed method makes it possible to increase reconstruction accuracy and is evaluated in a set of experiments, where its performance is compared to SfM with the perspective camera model.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Refractive Structure-from-Motion on Underwater Images Anne Jordt-Sedlazeck and Reinhard Koch, Institute of Computer Science, Kiel University, Germany, {sedlazeck,rk}@mip. [sent-1, score-0.029]

2 Abstract: In underwater environments, cameras need to be confined in an underwater housing, viewing the scene through a piece of glass. [sent-4, score-0.896]

3 In case of flat port underwater housings, light rays entering the camera housing are refracted twice, due to different medium densities of water, glass, and air. [sent-5, score-1.347]

4 This causes the usually linear rays of light to bend and the commonly used pinhole camera model to be invalid. [sent-6, score-0.481]

5 When using the pinhole camera model without explicitly modeling refraction in Structure-from-Motion (SfM) methods, a systematic model error occurs. [sent-7, score-0.644]

6 Therefore, in this paper, we propose a system for computing camera path and 3D points with explicit incorporation of refraction using new methods for pose estimation. [sent-8, score-0.543]

7 Additionally, a new error function is introduced for non-linear optimization, especially bundle adjustment. [sent-9, score-0.107]

8 The proposed method makes it possible to increase reconstruction accuracy and is evaluated in a set of experiments, where its performance is compared to SfM with the perspective camera model. [sent-10, score-0.267]

9 Introduction: In the last decade, many applications for images captured underwater have arisen. [sent-12, score-0.386]

10 They include scientific exploration of geological or archaeological structures on the sea floor [2], maintenance of offshore oil rigs, inspection of ship hulls, and measurements of ships and other fisheries [6]. [sent-13, score-0.153]

11 Due to the need to obtain measurements in the scenarios described above, the geometry of image formation is often utilized. [sent-14, score-0.024]

12 However, cameras used in an underwater environment are usually confined in an underwater housing filled with air, viewing the scene through a piece of glass. [sent-15, score-1.174]

13 If this glass is a flat port, the light rays entering the camera housing are refracted twice, once at the water-glass interface and again at the glass-air interface. [sent-16, score-1.354]

14 Many of the above-described applications require the camera to be lowered into the deep sea, sometimes to water depths of thousands of meters. [sent-17, score-0.446]

15 Therefore, the underwater housing needs to be strong enough to withstand immense water pressures, requiring the glass interface to be several centimeters thick. [sent-18, score-1.505]

16 The double refraction causes the usually straight rays of light to bend and change direction depending on the interface incidence angles. [sent-19, score-0.857]

17 When the ray in water in Figure 1 is followed without refraction (dashed line), it does not intersect the camera center. [sent-20, score-1.016]

18 [28] showed that the perspective camera model is invalid below water due to the rays not intersecting in one common center of projection. [sent-22, score-0.778]

19 Despite that, the perspective camera model is often used for underwater images, approximating the refractive effect to some extent. [sent-23, score-0.898]

20 [18] showed that a camera calibrated below water approximates refraction with focal length and radial distortion, and Sedlazeck and Koch [25] showed that the principal point and camera pose absorb some of this model error in addition to focal length and radial distortion. [sent-25, score-1.442]

21 Due to the perspective model being invalid, a systematic model error is introduced when applying perspective algorithms that utilize imaging geometry, like mosaicing or Structure-from-Motion (SfM) [9, 27], to underwater images. [sent-26, score-0.665]

22 Even so, several works can be found in the literature where the perspective camera model is used to reconstruct 3D scenes in underwater environments (e. [sent-27, score-0.653]

23 Instead of approximating refraction with the perspective camera model, refraction can also be modeled explicitly; this requires first finding and calibrating a parametrization of the housing’s glass port. [sent-30, score-1.346]

24 [19], coming from the area of photogrammetry, shows how the housing of a stereo rig can be calibrated. [sent-32, score-0.323]

25 [28] assumes a flat port interface with very thin glass and parallelism between glass and imaging sensor. [sent-34, score-1.082]

26 [1] showed how a more general camera with thick glass and a possible inclination angle between glass interface and imaging sensor can be calibrated, and Jordt-Sedlazeck et al. [sent-36, score-1.074]

27 Building upon a valid calibration of an underwater camera, meaning the intrinsics and a housing parametrization are known, several approaches to refractive SfM exist. [sent-38, score-0.986]

28 [4] proposed a method for refractive SfM, where the camera views a scene at the bottom of a pool through the water surface and the camera’s yaw and pitch with respect to the water surface are assumed to be known. [sent-42, score-0.958]

29 [16] showed results for 3D reconstruction with relative pose between two images with explicit incorporation of refraction. [sent-44, score-0.074]

30 They rely on outlier-free correspondences, which have to be selected manually, and glass thickness is not modeled explicitly. [sent-45, score-0.329]

31 The system cannot handle image sequences and because of the use of the reprojection error during bundle adjustment, it cannot be extended easily. [sent-46, score-0.107]

32 Our Contribution: In this paper, we propose a more general method for refractive SfM that can evaluate video sequences with more general patterns of movement compared to [4]. [sent-47, score-0.245]

33 The main problem to overcome is that due to refraction, the computation of the refractive re-projection error is infeasible in large non-linear optimization problems like bundle adjustment [29]. [sent-48, score-0.414]

34 Therefore we propose a new error function that can be computed efficiently and even enables the analytic derivation of the necessary Jacobian matrices of the error function. [sent-49, score-0.088]

35 Finally, a refractive plane sweep proposed in [13] is used to estimate dense depth maps for each view, which are then used to create the final 3D model. [sent-51, score-0.306]

36 Controlled experiments show that the proposed method performs better than a comparable perspective method, where the refractive effect is only approximated. [sent-52, score-0.333]

37 Refractive Camera Model and Non-linear Error Function: The camera model is the standard pinhole camera model with distortion [9, 27]. [sent-54, score-0.461]

38 Hence the camera’s intrinsics are defined in the camera matrix K containing focal length f, aspect ratio a, and a principal point (cx , cy), complemented by two coefficients for radial distortion r1 and r2 and two coefficients for tangential distortion t1 and t2. [sent-55, score-0.498]

39 The camera’s extrinsics are the rotation matrix R and the translation vector C, resulting in the projection matrix P = K[R^T | −R^T C]. [sent-56, score-0.088]
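As a concrete illustration of this parametrization, the following sketch projects a world point with K, R, C and the four distortion coefficients. It is a minimal example assuming the usual radial/tangential (Brown) distortion model; the function name and the NumPy formulation are illustrative and not taken from the paper.

```python
import numpy as np

def project_pinhole(X, K, R, C, r1=0.0, r2=0.0, t1=0.0, t2=0.0):
    """Project a 3D world point X with the pinhole model P = K [R^T | -R^T C],
    plus radial (r1, r2) and tangential (t1, t2) distortion (a sketch)."""
    Xl = R.T @ (X - C)                     # point in local camera coordinates
    x, y = Xl[0] / Xl[2], Xl[1] / Xl[2]    # normalized image coordinates
    rr = x * x + y * y                     # squared radius
    radial = 1.0 + r1 * rr + r2 * rr * rr
    xd = x * radial + 2.0 * t1 * x * y + t2 * (rr + 2.0 * x * x)
    yd = y * radial + t1 * (rr + 2.0 * y * y) + 2.0 * t2 * x * y
    u = K @ np.array([xd, yd, 1.0])        # focal length, aspect ratio, principal point
    return u[:2] / u[2]
```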

40 Refraction at the underwater housing is described by Snell’s law [11] and depends on the refractive indices of the different media: n_a for air, n_g for glass, and n_w for water. [sent-58, score-1.048]

41 As seen in Figure 1, the rays coming from the water do not intersect in the camera’s center of projection. [sent-59, score-0.529]

42 However, [1] determined that a camera behind a flat port underwater housing is an axial camera, i.e. [sent-60, score-1.119]

43 all rays coming from the water intersect a common axis defined by the camera center and the interface normal. [sent-62, score-1.457]

44 All ray segments together with the interface normal lie in a common plane, the Plane of Refraction (POR). [sent-63, score-0.515]

45 The blue line depicts the interface normal passing through the center of projection which is intersected by all rays rw [1] (dashed line). [sent-64, score-0.721]

46 The virtual camera’s center C_v is located at the intersection of the un-refracted ray (dashed line) and the interface normal (blue), and its focal length is f_v = d. [sent-66, score-0.819]

47 Moreover, all segments of the light ray ra in air, rg in glass, and rw in water, and the interface normal n lie in one common plane, the Plane of Refraction POR (pale blue plane in Fig. [sent-69, score-0.896]

48 In order to back-project a ray from a 2D image point, the ray in air ra is determined using the perspective parameters explained above. [sent-71, score-0.612]

49 Then, the ray direction in glass r_g is computed from the ray in air r_a via the vector form of Snell’s law [1]: r_g = (n_a/n_g) r_a + ( sqrt(1 − (n_a/n_g)^2 (1 − (r_a^T n)^2)) − (n_a/n_g)(r_a^T n) ) n (1), with n denoting the interface normal. [sent-72, score-0.699]

50 Using r_g, n_g, and n_w, the ray in water r_w is computed analogously. [sent-76, score-0.594]

51 Along with the interface distance d and the interface thickness d_g, r_a and r_g allow determining the starting point p of the ray r_w on the outer glass plane (Fig. [sent-77, score-1.429]

52 Hence, for each pixel, a raxel [8] can be computed using the proposed parameters, instead of calibrating each raxel independently, which is often difficult. [sent-79, score-0.13]
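To make the back-projection step concrete, here is a minimal NumPy sketch. It assumes the interface normal n is given in camera coordinates and oriented along the viewing direction, applies the vector form of Snell's law twice, and places the starting point p on the outer glass plane using the interface distance d and glass thickness d_g; the helper names and the default refractive indices are assumptions of this sketch, not code from the paper.

```python
import numpy as np

def refract(r, n, n1, n2):
    """Refract unit direction r at a plane with unit normal n (oriented along r),
    going from refractive index n1 to n2 (vector form of Snell's law)."""
    r, n = r / np.linalg.norm(r), n / np.linalg.norm(n)
    if np.dot(r, n) < 0.0:
        n = -n                              # make the normal point along the ray
    c, k = np.dot(r, n), n1 / n2
    disc = 1.0 - k * k * (1.0 - c * c)      # assumed non-negative (no total internal reflection)
    return k * r + (np.sqrt(disc) - k * c) * n

def back_project(x_img, K, n, d, dg, na=1.0, ng=1.5, nw=1.333):
    """Back-project an (undistorted) pixel to the ray in water, returning its starting
    point p on the outer glass plane and its direction r_w, in camera coordinates."""
    n = n / np.linalg.norm(n)
    ra = np.linalg.solve(K, np.array([x_img[0], x_img[1], 1.0]))   # ray in air
    ra /= np.linalg.norm(ra)
    rg = refract(ra, n, na, ng)                                    # ray in glass
    rw = refract(rg, n, ng, nw)                                    # ray in water
    p = (d / np.dot(ra, n)) * ra + (dg / np.dot(rg, n)) * rg       # point on outer glass plane
    return p, rw
```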

53 Using the proposed parameter set, [1] derived two constraints for the flat port underwater camera. [sent-80, score-0.612]

54 The first one is called the Flat Refractive Constraint (FRC) and states that if a 3D point X has been transformed into the local camera coordinate system, its direction should be the same as the ray in water r_w, hence: (R^T X − R^T C − p) × r_w = 0 (FRC). From the POR follows the second constraint: (R^T X − R^T C)^T (n × r_w) = 0 (POR). [sent-81, score-0.867]
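Both constraints can be evaluated directly once p and r_w are known from the back-projection sketched above; the snippet below is an illustrative evaluation for a world point X with camera pose R, C and interface normal n (same assumptions and naming as before).

```python
import numpy as np

def flat_refractive_constraints(X, R, C, p, rw, n):
    """Residuals of the FRC and POR constraints for a 3D world point X; both are
    (close to) zero when X is consistent with the refracted ray (p, r_w)."""
    Xl = R.T @ X - R.T @ C                    # point in local camera coordinates
    frc = np.cross(Xl - p, rw)                # (R^T X - R^T C - p) x r_w = 0   (FRC)
    por = float(np.dot(Xl, np.cross(n, rw)))  # (R^T X - R^T C)^T (n x r_w) = 0 (POR)
    return frc, por
```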

55 Virtual Camera Error Function: When projecting a 3D point into a camera confined in an underwater housing with explicit refraction computation, Agrawal et al. [sent-85, score-1.257]

56 [1] determined that a 12th degree polynomial needs to be solved. [sent-86, score-0.025]

57 While this insight allows solving the projection problem much more efficiently than previous approaches, where usually the projection was determined by an optimization using the back-projection function [17], it is still infeasible in classic SfM, especially bundle adjustment. [sent-87, score-0.182]

58 It builds upon the idea in [24], where a virtual camera is defined for each 2D point into which the corresponding 3D point can be projected perspectively (Fig. [sent-89, score-0.402]

59 Note that a similar idea has been expressed in [23], however, the proposed method is adapted to the refractive case and is exact for each pixel. [sent-91, score-0.245]

60 The virtual camera error is computed using the ray in water rw and its starting point p as described above to define a virtual perspective camera. [sent-92, score-1.218]

61 The virtual rotation R_v is defined through its rotation axis, which is the cross product of the interface normal and the optical axis, and its rotation angle, which is obtained from the scalar product of the interface normal and the optical axis. [sent-94, score-0.93]
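A possible construction of R_v from this axis-angle description, using Rodrigues' formula; the helper name and the choice of (0, 0, 1) as the optical axis are assumptions of this sketch.

```python
import numpy as np

def virtual_rotation(n, z=np.array([0.0, 0.0, 1.0])):
    """Rotation R_v that aligns the optical axis z with the interface normal n,
    built from the axis z x n and the angle between z and n (Rodrigues' formula)."""
    n = n / np.linalg.norm(n)
    axis = np.cross(z, n)
    s, c = np.linalg.norm(axis), float(np.dot(z, n))   # sin and cos of the rotation angle
    if s < 1e-12:                                      # normal already aligned with z
        return np.eye(3)
    k = axis / s
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])                # skew-symmetric cross-product matrix
    return np.eye(3) + s * Kx + (1.0 - c) * (Kx @ Kx)
```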

62 Thus, a 3D point X in the global coordinate system is first transformed into a point in the local camera coordinate system X_l and then into the virtual camera coordinate system X_v by: X_l = R^T X − R^T C (5) and X_v = R_v^T X_l − R_v^T C_v (6). [sent-96, score-0.699]

63 The starting point on the outer interface is also transformed into the virtual camera: p_v = R_v^T p − R_v^T C_v (7). [sent-97, score-0.532]

64 The error is then computed from the 2D projections of X_v and p_v onto the virtual image plane: g_v = proj_v(X_v) − proj_v(p_v) (8), where proj_v(·) denotes the perspective projection into the virtual camera. [sent-98, score-0.233]

65 g_v can be used as a non-linear error function for optimization with different parametrizations. [sent-101, score-0.092]

66 For example, considering one camera and a set of n 2D-3D correspondences, when only the extrinsic parameters and the 3D points are unknown, the known ray in water and the virtual camera center are used. [sent-102, score-0.994]
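Putting the pieces together, the sketch below computes the virtual-camera residual for one observation. It takes p and r_w from the back-projection above, reuses the virtual_rotation helper from the earlier sketch, places C_v at the intersection of the un-refracted ray with the interface-normal axis, and sets f_v = d; the exact scaling of the residual is an assumption of this sketch and need not match the paper's Eq. (8) exactly.

```python
import numpy as np

def virtual_camera_error(X, R, C, p, rw, n, d):
    """Virtual-camera error g_v for one 2D-3D correspondence; p, rw and n are given in
    camera coordinates (see back_project and virtual_rotation sketched above)."""
    n = n / np.linalg.norm(n)
    # C_v: intersect the un-refracted ray {p + t rw} with the axis through the camera
    # center along n (both lines lie in the plane of refraction)
    p_perp = p - np.dot(p, n) * n
    rw_perp = rw - np.dot(rw, n) * n
    t = -np.dot(p_perp, rw_perp) / np.dot(rw_perp, rw_perp)
    Cv = p + t * rw
    Rv = virtual_rotation(n)              # virtual camera looks along the interface normal
    fv = d                                # virtual focal length f_v = d
    Xl = R.T @ X - R.T @ C                # 3D point in local camera coordinates
    Xv = Rv.T @ (Xl - Cv)                 # ... and in virtual camera coordinates
    pv = Rv.T @ (p - Cv)                  # transformed starting point
    return fv * (Xv[:2] / Xv[2] - pv[:2] / pv[2])   # difference of the two 2D projections
```

Stacking this residual over all 2D-3D correspondences yields a least-squares problem that only involves perspective projections, which is what makes it usable inside bundle adjustment.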


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('underwater', 0.386), ('refraction', 0.329), ('glass', 0.286), ('housing', 0.278), ('water', 0.267), ('interface', 0.259), ('refractive', 0.245), ('ray', 0.189), ('camera', 0.179), ('port', 0.152), ('virtual', 0.144), ('rw', 0.138), ('rtc', 0.131), ('rays', 0.129), ('sfm', 0.115), ('rg', 0.112), ('rtx', 0.098), ('air', 0.093), ('perspective', 0.088), ('por', 0.081), ('flat', 0.074), ('normal', 0.067), ('raxel', 0.065), ('refracted', 0.065), ('focal', 0.064), ('bundle', 0.063), ('plane', 0.061), ('xv', 0.06), ('confined', 0.06), ('pinhole', 0.058), ('frc', 0.058), ('treibitz', 0.054), ('intersect', 0.052), ('agrawal', 0.052), ('gv', 0.048), ('coming', 0.045), ('distortion', 0.045), ('pv', 0.045), ('bend', 0.045), ('error', 0.044), ('axis', 0.044), ('thickness', 0.043), ('intrinsics', 0.043), ('dashed', 0.043), ('radial', 0.043), ('light', 0.042), ('entering', 0.042), ('sea', 0.041), ('cv', 0.041), ('piece', 0.04), ('invalid', 0.04), ('coordinate', 0.039), ('showed', 0.039), ('center', 0.036), ('incorporation', 0.035), ('parametrization', 0.034), ('systematic', 0.034), ('koch', 0.033), ('line', 0.032), ('infeasible', 0.032), ('projection', 0.031), ('fv', 0.031), ('nw', 0.031), ('rotation', 0.03), ('adjustment', 0.03), ('transformed', 0.03), ('xl', 0.029), ('length', 0.029), ('lavest', 0.029), ('perspectively', 0.029), ('withstand', 0.029), ('edl', 0.029), ('snell', 0.029), ('maintenance', 0.029), ('ships', 0.029), ('geological', 0.029), ('intersected', 0.029), ('outer', 0.029), ('ra', 0.028), ('causes', 0.028), ('twice', 0.027), ('gvi', 0.027), ('hulls', 0.027), ('extrinsics', 0.027), ('ship', 0.025), ('mosaicing', 0.025), ('kiel', 0.025), ('parallelism', 0.025), ('tangential', 0.025), ('informat', 0.025), ('inclination', 0.025), ('axial', 0.025), ('reinhard', 0.025), ('incidence', 0.025), ('determined', 0.025), ('point', 0.025), ('viewing', 0.024), ('ng', 0.024), ('calibrated', 0.024), ('gaining', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 348 iccv-2013-Refractive Structure-from-Motion on Underwater Images

Author: Anne Jordt-Sedlazeck, Reinhard Koch

Abstract: In underwater environments, cameras need to be confined in an underwater housing, viewing the scene through a piece of glass. In case of flat port underwater housings, light rays entering the camera housing are refracted twice, due to different medium densities of water, glass, and air. This causes the usually linear rays of light to bend and the commonly used pinhole camera model to be invalid. When using the pinhole camera model without explicitly modeling refraction in Structure-from-Motion (SfM) methods, a systematic model error occurs. Therefore, in this paper, we propose a system for computing camera path and 3D points with explicit incorporation of refraction using new methods for pose estimation. Additionally, a new error function is introduced for non-linear optimization, especially bundle adjustment. The proposed method makes it possible to increase reconstruction accuracy and is evaluated in a set of experiments, where its performance is compared to SfM with the perspective camera model.

2 0.13277008 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces

Author: Bastien Jacquet, Christian Häne, Kevin Köser, Marc Pollefeys

Abstract: Although specular objects have gained interest in recent years, virtually no approaches exist for markerless reconstruction of reflective scenes in the wild. In this work, we present a practical approach to capturing normal maps in real-world scenes using video only. We focus on nearly planar surfaces such as windows, facades from glass or metal, or frames, screens and other indoor objects and show how normal maps of these can be obtained without the use of an artificial calibration object. Rather, we track the reflections of real-world straight lines, while moving with a hand-held or vehicle-mounted camera in front of the object. In contrast to error-prone local edge tracking, we obtain the reflections by a robust, global segmentation technique of an ortho-rectified 3D video cube that also naturally allows efficient user interaction. Then, at each point of the reflective surface, the resulting 2D-curve to 3D-line correspondence provides a novel quadratic constraint on the local surface normal. This allows to globally solve for the shape by integrability and smoothness constraints and easily supports the usage of multiple lines. We demonstrate the technique on several objects and facades.

3 0.1029467 402 iccv-2013-Street View Motion-from-Structure-from-Motion

Author: Bryan Klingner, David Martin, James Roseborough

Abstract: We describe a structure-from-motion framework that handles “generalized” cameras, such as moving rollingshutter cameras, and works at an unprecedented scale— billions of images covering millions of linear kilometers of roads—by exploiting a good relative pose prior along vehicle paths. We exhibit a planet-scale, appearanceaugmented point cloud constructed with our framework and demonstrate its practical use in correcting the pose of a street-level image collection.

4 0.10104837 367 iccv-2013-SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels

Author: Jianxiong Xiao, Andrew Owens, Antonio Torralba

Abstract: Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation: hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them together, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http://sun3d.cs.princeton.edu.

5 0.099461637 280 iccv-2013-Multi-view 3D Reconstruction from Uncalibrated Radially-Symmetric Cameras

Author: Jae-Hak Kim, Yuchao Dai, Hongdong Li, Xin Du, Jonghyuk Kim

Abstract: We present a new multi-view 3D Euclidean reconstruction method for arbitrary uncalibrated radially-symmetric cameras, which needs no calibration or any camera model parameters other than radial symmetry. It is built on the radial 1D camera model [25], a unified mathematical abstraction to different types of radially-symmetric cameras. We formulate the problem of multi-view reconstruction for radial 1D cameras as a matrix rank minimization problem. Efficient implementation based on alternating direction continuation is proposed to handle scalability issue for real-world applications. Our method applies to a wide range of omnidirectional cameras including both dioptric and catadioptric (central and non-central) cameras. Additionally, our method deals with complete and incomplete measurements under a unified framework elegantly. Experiments on both synthetic and real images from various types of cameras validate the superior performance of our new method, in terms of numerical accuracy and robustness.

6 0.08818648 342 iccv-2013-Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length

7 0.086030126 323 iccv-2013-Pose Estimation with Unknown Focal Length Using Points, Directions and Lines

8 0.084708199 17 iccv-2013-A Global Linear Method for Camera Pose Registration

9 0.078455374 351 iccv-2013-Restoring an Image Taken through a Window Covered with Dirt or Rain

10 0.076429389 152 iccv-2013-Extrinsic Camera Calibration without a Direct View Using Spherical Mirror

11 0.075756282 436 iccv-2013-Unsupervised Intrinsic Calibration from a Single Frame Using a "Plumb-Line" Approach

12 0.074556142 252 iccv-2013-Line Assisted Light Field Triangulation and Stereo Matching

13 0.074406408 27 iccv-2013-A Robust Analytical Solution to Isometric Shape-from-Template with Focal Length Calibration

14 0.074331343 262 iccv-2013-Matching Dry to Wet Materials

15 0.072720706 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

16 0.066798598 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging

17 0.065884717 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects

18 0.064438 111 iccv-2013-Detecting Dynamic Objects with Multi-view Background Subtraction

19 0.064430647 115 iccv-2013-Direct Optimization of Frame-to-Frame Rotation

20 0.064211354 286 iccv-2013-NYC3DCars: A Dataset of 3D Vehicles in Geographic Context


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.096), (1, -0.118), (2, -0.031), (3, 0.035), (4, -0.013), (5, 0.018), (6, 0.027), (7, -0.107), (8, 0.021), (9, 0.003), (10, 0.03), (11, -0.024), (12, -0.071), (13, 0.009), (14, 0.008), (15, 0.018), (16, 0.037), (17, 0.104), (18, -0.023), (19, 0.028), (20, -0.011), (21, -0.111), (22, -0.029), (23, 0.006), (24, -0.067), (25, 0.017), (26, -0.029), (27, -0.035), (28, -0.024), (29, 0.002), (30, -0.009), (31, 0.019), (32, 0.014), (33, -0.066), (34, -0.046), (35, 0.056), (36, -0.033), (37, -0.046), (38, -0.003), (39, -0.015), (40, -0.041), (41, -0.055), (42, -0.021), (43, -0.008), (44, 0.062), (45, -0.018), (46, 0.039), (47, -0.011), (48, 0.085), (49, 0.056)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95377797 348 iccv-2013-Refractive Structure-from-Motion on Underwater Images

Author: Anne Jordt-Sedlazeck, Reinhard Koch

Abstract: In underwater environments, cameras need to be confined in an underwater housing, viewing the scene through a piece of glass. In case of flat port underwater housings, light rays entering the camera housing are refracted twice, due to different medium densities of water, glass, and air. This causes the usually linear rays of light to bend and the commonly used pinhole camera model to be invalid. When using the pinhole camera model without explicitly modeling refraction in Structure-from-Motion (SfM) methods, a systematic model error occurs. Therefore, in this paper, we propose a system for computing camera path and 3D points with explicit incorporation of refraction using new methods for pose estimation. Additionally, a new error function is introduced for non-linear optimization, especially bundle adjustment. The proposed method makes it possible to increase reconstruction accuracy and is evaluated in a set of experiments, where its performance is compared to SfM with the perspective camera model.

2 0.78114849 152 iccv-2013-Extrinsic Camera Calibration without a Direct View Using Spherical Mirror

Author: Amit Agrawal

Abstract: We consider the problem of estimating the extrinsic parameters (pose) of a camera with respect to a reference 3D object without a direct view. Since the camera does not view the object directly, previous approaches have utilized reflections in a planar mirror to solve this problem. However, a planar mirror based approach requires a minimum of three reflections and has degenerate configurations where estimation fails. In this paper, we show that the pose can be obtained using a single reflection in a spherical mirror of known radius. This makes our approach simpler and easier in practice. In addition, unlike planar mirrors, the spherical mirror based approach does not have any degenerate configurations, leading to a robust algorithm. While a planar mirror reflection results in a virtual perspective camera, a spherical mirror reflection results in a non-perspective axial camera. The axial nature of rays allows us to compute the axis (direction of sphere center) and few pose parameters in a linear fashion. We then derive an analytical solution to obtain the distance to the sphere center and remaining pose parameters and show that it corresponds to solving a 16th degree equation. We present comparisons with a recent method that use planar mirrors and show that our approach recovers more accurate pose in the presence of noise. Extensive simulations and results on real data validate our algorithm.

3 0.76129597 280 iccv-2013-Multi-view 3D Reconstruction from Uncalibrated Radially-Symmetric Cameras

Author: Jae-Hak Kim, Yuchao Dai, Hongdong Li, Xin Du, Jonghyuk Kim

Abstract: We present a new multi-view 3D Euclidean reconstruction method for arbitrary uncalibrated radially-symmetric cameras, which needs no calibration or any camera model parameters other than radial symmetry. It is built on the radial 1D camera model [25], a unified mathematical abstraction to different types of radially-symmetric cameras. We formulate the problem of multi-view reconstruction for radial 1D cameras as a matrix rank minimization problem. Efficient implementation based on alternating direction continuation is proposed to handle scalability issue for real-world applications. Our method applies to a wide range of omnidirectional cameras including both dioptric and catadioptric (central and non-central) cameras. Additionally, our method deals with complete and incomplete measurements under a unified framework elegantly. Experiments on both synthetic and real images from various types of cameras validate the superior performance of our new method, in terms of numerical accuracy and robustness.

4 0.75916213 49 iccv-2013-An Enhanced Structure-from-Motion Paradigm Based on the Absolute Dual Quadric and Images of Circular Points

Author: Lilian Calvet, Pierre Gurdjos

Abstract: This work aims at introducing a new unified Structurefrom-Motion (SfM) paradigm in which images of circular point-pairs can be combined with images of natural points. An imaged circular point-pair encodes the 2D Euclidean structure of a world plane and can easily be derived from the image of a planar shape, especially those including circles. A classical SfM method generally runs two steps: first a projective factorization of all matched image points (into projective cameras and points) and second a camera selfcalibration that updates the obtained world from projective to Euclidean. This work shows how to introduce images of circular points in these two SfM steps while its key contribution is to provide the theoretical foundations for combining “classical” linear self-calibration constraints with additional ones derived from such images. We show that the two proposed SfM steps clearly contribute to better results than the classical approach. We validate our contributions on synthetic and real images.

5 0.75682533 436 iccv-2013-Unsupervised Intrinsic Calibration from a Single Frame Using a "Plumb-Line" Approach

Author: R. Melo, M. Antunes, J.P. Barreto, G. Falcão, N. Gonçalves

Abstract: Estimating the amount and center of distortion from lines in the scene has been addressed in the literature by the so-called “plumb-line” approach. In this paper we propose a new geometric method to estimate not only the distortion parameters but the entire camera calibration (up to an “angular” scale factor) using a minimum of 3 lines. We propose a new framework for the unsupervised simultaneous detection of natural images of lines and camera parameter estimation, enabling a robust calibration from a single image. Comparative experiments with existing automatic approaches for the distortion estimation and with ground truth data are presented.

6 0.74646437 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces

7 0.68216109 17 iccv-2013-A Global Linear Method for Camera Pose Registration

8 0.67723089 250 iccv-2013-Lifting 3D Manhattan Lines from a Single Image

9 0.65408278 346 iccv-2013-Rectangling Stereographic Projection for Wide-Angle Image Visualization

10 0.62564123 402 iccv-2013-Street View Motion-from-Structure-from-Motion

11 0.62281156 323 iccv-2013-Pose Estimation with Unknown Focal Length Using Points, Directions and Lines

12 0.59932095 397 iccv-2013-Space-Time Tradeoffs in Photo Sequencing

13 0.58233792 342 iccv-2013-Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length

14 0.54724294 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal

15 0.54360443 27 iccv-2013-A Robust Analytical Solution to Isometric Shape-from-Template with Focal Length Calibration

16 0.53307593 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

17 0.51319063 353 iccv-2013-Revisiting the PnP Problem: A Fast, General and Optimal Solution

18 0.51221639 84 iccv-2013-Complex 3D General Object Reconstruction from Line Drawings

19 0.50850219 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera

20 0.49773568 184 iccv-2013-Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.024), (7, 0.019), (12, 0.013), (26, 0.51), (27, 0.017), (31, 0.018), (42, 0.07), (64, 0.016), (73, 0.016), (89, 0.131), (98, 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.93565756 405 iccv-2013-Structured Light in Sunlight

Author: Mohit Gupta, Qi Yin, Shree K. Nayar

Abstract: Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than the projected light. In this paper, we propose the concept of light-concentration to overcome strong ambient illumination. Our key observation is that given a fixed light (power) budget, it is always better to allocate it sequentially in several portions of the scene, as compared to spreading it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders lower acquisition time than existing approaches. Our approach is illumination-adaptive as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.

2 0.89596659 51 iccv-2013-Anchored Neighborhood Regression for Fast Example-Based Super-Resolution

Author: Radu Timofte, Vincent De_Smet, Luc Van_Gool

Abstract: Recently there have been significant advances in image upscaling or image super-resolution based on a dictionary of low and high resolution exemplars. The running time of the methods is often ignored despite the fact that it is a critical factor for real applications. This paper proposes fast super-resolution methods while making no compromise on quality. First, we support the use of sparse learned dictionaries in combination with neighbor embedding methods. In this case, the nearest neighbors are computed using the correlation with the dictionary atoms rather than the Euclidean distance. Moreover, we show that most of the current approaches reach top performance for the right parameters. Second, we show that using global collaborative coding has considerable speed advantages, reducing the super-resolution mapping to a precomputed projective matrix. Third, we propose the anchored neighborhood regression. That is to anchor the neighborhood embedding of a low resolution patch to the nearest atom in the dictionary and to precompute the corresponding embedding matrix. These proposals are contrasted with current state-of-the-art methods on standard images. We obtain similar or improved quality and one or two orders of magnitude speed improvements.

3 0.89181632 395 iccv-2013-Slice Sampling Particle Belief Propagation

Author: Oliver Müller, Michael Ying Yang, Bodo Rosenhahn

Abstract: Inference in continuous label Markov random fields is a challenging task. We use particle belief propagation (PBP) for solving the inference problem in continuous label space. Sampling particles from the belief distribution is typically done by using Metropolis-Hastings (MH) Markov chain Monte Carlo (MCMC) methods which involves sampling from a proposal distribution. This proposal distribution has to be carefully designed depending on the particular model and input data to achieve fast convergence. We propose to avoid dependence on a proposal distribution by introducing a slice sampling based PBP algorithm. The proposed approach shows superior convergence performance on an image denoising toy example. Our findings are validated on a challenging relational 2D feature tracking application.

4 0.87903118 125 iccv-2013-Drosophila Embryo Stage Annotation Using Label Propagation

Author: Tomáš Kazmar, Evgeny Z. Kvon, Alexander Stark, Christoph H. Lampert

Abstract: In this work we propose a system for automatic classification of Drosophila embryos into developmental stages. While the system is designed to solve an actual problem in biological research, we believe that the principle underlying it is interesting not only for biologists, but also for researchers in computer vision. The main idea is to combine two orthogonal sources of information: one is a classifier trained on strongly invariant features, which makes it applicable to images of very different conditions, but also leads to rather noisy predictions. The other is a label propagation step based on a more powerful similarity measure that however is only consistent within specific subsets of the data at a time. In our biological setup, the information sources are the shape and the staining patterns of embryo images. We show experimentally that while neither of the methods can be used by itself to achieve satisfactory results, their combination achieves prediction quality comparable to human performance.

same-paper 5 0.86725265 348 iccv-2013-Refractive Structure-from-Motion on Underwater Images

Author: Anne Jordt-Sedlazeck, Reinhard Koch

Abstract: In underwater environments, cameras need to be confined in an underwater housing, viewing the scene through a piece of glass. In case of flat port underwater housings, light rays entering the camera housing are refracted twice, due to different medium densities of water, glass, and air. This causes the usually linear rays of light to bend and the commonly used pinhole camera model to be invalid. When using the pinhole camera model without explicitly modeling refraction in Structure-from-Motion (SfM) methods, a systematic model error occurs. Therefore, in this paper, we propose a system for computing camera path and 3D points with explicit incorporation of refraction using new methods for pose estimation. Additionally, a new error function is introduced for non-linear optimization, especially bundle adjustment. The proposed method makes it possible to increase reconstruction accuracy and is evaluated in a set of experiments, where its performance is compared to SfM with the perspective camera model.

6 0.86239398 282 iccv-2013-Multi-view Object Segmentation in Space and Time

7 0.85789394 198 iccv-2013-Hierarchical Part Matching for Fine-Grained Visual Categorization

8 0.7935968 295 iccv-2013-On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

9 0.76874566 102 iccv-2013-Data-Driven 3D Primitives for Single Image Understanding

10 0.76556921 8 iccv-2013-A Deformable Mixture Parsing Model with Parselets

11 0.67221671 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

12 0.66679919 414 iccv-2013-Temporally Consistent Superpixels

13 0.64476788 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

14 0.62917191 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

15 0.62891555 150 iccv-2013-Exemplar Cut

16 0.621292 432 iccv-2013-Uncertainty-Driven Efficiently-Sampled Sparse Graphical Models for Concurrent Tumor Segmentation and Atlas Registration

17 0.61653525 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

18 0.61333829 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes

19 0.61011219 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

20 0.60335678 225 iccv-2013-Joint Segmentation and Pose Tracking of Human in Natural Videos