cvpr cvpr2013 cvpr2013-447 knowledge-graph by maker-knowledge-mining

447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation


Source: pdf

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. [sent-4, score-0.688]

2 We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. [sent-5, score-0.974]

3 We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments. [sent-7, score-0.325]

4 One of the main difficulties for computer vision is refraction caused by the camera housing. [sent-10, score-0.718]

5 Recently there has been increasing interest in applying a physically correct model of refraction to improve the accuracy of stereo reconstructions [1, 5, 6, 11]. [sent-12, score-0.576]

6 Building upon these results, we further characterize the flat refraction camera model by studying the dispersion of light, which is the phenomenon where light refracts at a different angle depending on its wavelength. [sent-15, score-1.318]

7 While previous authors regarded dispersion as a minor problem to be ignored [9], we show that the disparate light paths can be exploited in a manner similar to triangulation, thereby increasing calibration accuracy. [sent-16, score-0.718]

8 In this paper we first show that dispersion provides additional constraints on the parameters of the refraction model, and explain how they can be used in calibration. [sent-17, score-0.974]

9 The refractive index of each layer varies with the wavelength of light, resulting in a different path for each wavelength. [sent-24, score-0.531]

10 All paths from a single object point lie on a common plane containing the refraction axis. [sent-25, score-0.676]

11 demonstrate how to perform the calibration in practice, including the construction of a calibration device using inexpensive parts. [sent-26, score-0.482]

12 We develop an original procedure to obtain ground truth values for real data, which is missing in previous works on underwater camera calibration. [sent-28, score-0.345]

13 analyzed the case of a single refraction at a flat air-water interface, and showed that such a camera system does not have a single viewpoint. [sent-31, score-0.841]

14 They developed a calibration procedure to find the unknown distance between the camera center and the interface, assuming that the interface is parallel to both the image plane and the checkerboard calibration pattern. [sent-32, score-0.792]

15 Sedlazeck and Koch developed a more flexible calibration method that does not require a calibration object, and also accounts for two refractions when the port of the camera housing is thick. [sent-34, score-0.751]

16 showed that the flat refraction camera model corresponds to an axial camera. [sent-42, score-0.818]

17 With this realization, they formulated a calibration framework that can handle multi-layer refraction models uni- formly through a set of linear constraints on the model parameters. [sent-43, score-0.794]

18 The resulting method still uses nonlinear optimization, but produces initial estimates efficiently by solving a set of linear systems while only assuming that the calibration object geometry is known [1]. [sent-44, score-0.351]

19 Instead, we examine the properties of the refraction camera model with respect to the dispersion of light, and find that by taking these properties into account, we can achieve greater calibration accuracy than if they were ignored. [sent-48, score-1.315]

20 Flat refraction model We first describe the flat refraction camera model and its parameters. [sent-50, score-1.394]

21 An example application which fits this model is a perspective camera placed in a watertight housing with a flat glass port. [sent-51, score-0.406]

22 Consider Figure 1, which illustrates a pinhole perspective camera observing a scene through n ≥ 1 parallel refraction layers. [sent-52, score-0.737]

23 Each layer i = 1, . . . , n is defined by a thickness di and a refractive index μi,λ, which may depend on the wavelength λ of light. [sent-56, score-0.483]

24 The distance between the camera and the first layer is given by d0, and the refraction axis A is the vector from the camera center that is perpendicular to all of the refraction layers. [sent-57, score-1.679]

25 show how refractive indices can also be estimated [1]. [sent-60, score-0.308]

26 Moreover, if the camera is placed in an underwater housing, the thickness of the port is often known. [sent-61, score-0.425]
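
To make the model described above concrete, here is a minimal ray-tracing sketch of the flat refraction camera: a camera ray is intersected with each flat interface perpendicular to the axis A and bent by the vector form of Snell's law. The helper names and the example axis, depths, and index values are illustrative, not taken from the paper.

```python
import numpy as np

def refract(d, n, mu_in, mu_out):
    # Vector form of Snell's law: bend unit direction d at a surface with unit
    # normal n pointing toward the incident side, going from index mu_in to
    # mu_out.  Returns None on total internal reflection.
    eta = mu_in / mu_out
    cos_i = -np.dot(d, n)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def trace_flat_refraction(v, axis, interface_depths, mus):
    # Trace a camera ray v (from the camera centre at the origin) through flat
    # interfaces perpendicular to the refraction axis.  interface_depths[j] is
    # the distance of interface j from the camera along the axis (so
    # interface_depths[0] == d0); mus[j] and mus[j+1] are the refractive
    # indices on either side of interface j (wavelength-dependent).
    axis = axis / np.linalg.norm(axis)
    v = v / np.linalg.norm(v)
    p = np.zeros(3)
    path = [p]
    for j, depth in enumerate(interface_depths):
        s = (depth - axis @ p) / (axis @ v)   # hit the plane axis . x = depth
        p = p + s * v
        path.append(p)
        v = refract(v, -axis, mus[j], mus[j + 1])
    return path, v                            # piecewise-linear path, final direction

# Illustrative underwater housing: air inside the housing, a glass port, water.
axis = np.array([0.0, 0.07, 1.0])
path, v_out = trace_flat_refraction(np.array([0.10, 0.05, 1.0]), axis,
                                    interface_depths=[30.0, 40.0],
                                    mus=[1.0, 1.49, 1.33])
```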

27 Dispersion of light In most common substances, the index of refraction varies with the wavelength of the incident light. [sent-67, score-0.905]

28 A single ray of polychromatic light will be refracted into multiple rays depending on the wavelength components. [sent-69, score-0.411]

29 In the context of the flat refraction camera model, this implies that a single physical point will be imaged at multiple image points. [sent-70, score-0.845]

30 , distilled water at 19◦C has a refractive index that varies from 1. [sent-74, score-0.402]

31 In the flat refraction model discussed above, for one refraction (n = 1) the amount of dispersion can be characterized as the solution of a quartic equation. [sent-78, score-1.659]

32 Figure 2 (left) shows the plane of refraction for a ray passing through the refraction interface at (0, q) and reaching an object point (d1,p + q); without loss of generality we assume d0 = 1. [sent-79, score-1.387]

33 The refractive indices for this ray are μ0,a and μ1,a on the left and right sides of the interface respectively. [sent-80, score-0.418]

34 Consider another ray with different refractive indices μ0,b, μ1,b passing through the same object point but refracting at a different location (0, q + δ). [sent-81, score-0.433]

35 Figure 2 (right) shows how δ varies with q and d1 for the underwater calibration case. [sent-94, score-0.419]
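
Rather than writing out the quartic, the dispersion δ for the single-refraction case sketched above can also be obtained numerically: for each wavelength, solve Snell's law by bisection for the interface height q at which the ray from the camera reaches the given object point, then take the difference. The refractive-index values below are only illustrative.

```python
import numpy as np

def refraction_height(P, d1, mu0, mu1, d0=1.0):
    # Height q at which a ray from the camera at (-d0, 0) must cross the
    # interface x = 0 so that, after refraction, it reaches the object point
    # (d1, P) on the other side.  Bisection on the Snell residual replaces the
    # closed-form quartic.
    def residual(q):
        sin0 = q / np.hypot(d0, q)              # sine of the incidence angle
        sin1 = (P - q) / np.hypot(d1, P - q)    # sine of the refraction angle
        return mu0 * sin0 - mu1 * sin1
    lo, hi = 0.0, P                             # residual(0) < 0 < residual(P)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Dispersion delta between two wavelengths reaching the same object point
# (air on the camera side, water beyond; index values are illustrative).
q_a = refraction_height(P=100.0, d1=400.0, mu0=1.0, mu1=1.343)   # ~405nm
q_b = refraction_height(P=100.0, d1=400.0, mu0=1.0, mu1=1.331)   # ~660nm
delta = q_b - q_a
```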

36 “normal dispersion” where the refractive index increases as the light wavelength decreases. [sent-99, score-0.551]

37 This implies that there will be nonzero dispersion for all rays not perpendicular to the refraction layers. [sent-100, score-1.049]

38 Calibration In this section we develop constraint equations using dispersion for calibration. [sent-102, score-0.452]

39 is that the refraction axis A can be estimated first, which then allows the layer thicknesses to be found by solving a linear system of equations [1]. [sent-104, score-1.0]

40 By considering the dispersion of light, we obtain a new and effective constraint on A. [sent-105, score-0.416]

41 We show how to incorporate dispersion as a form of triangulation in estimating the layer thicknesses, and also in a final nonlinear optimization step to refine the estimated parameters. [sent-106, score-0.649]

42 Let va, vb be unit vectors for the directions of two rays of different wavelengths a, b, and suppose they correspond to a single point in the scene. [sent-110, score-0.313]

43 Both rays must lie on the same plane of refraction containing the refraction axis A, which passes through the camera center. [sent-111, score-1.543]

44 Therefore, if the refractive indices differ for the two wavelengths such that va ≠ vb, we have the following dispersion constraint: Dispersion: (va × vb) · A = 0. [sent-112, score-0.889]
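
Since both rays and the axis lie in one plane through the camera centre, each ray pair gives a linear constraint on A. A least-squares sketch of how such constraints could be combined (our own helper, not the paper's estimator):

```python
import numpy as np

def estimate_axis_from_dispersion(rays_a, rays_b):
    # rays_a, rays_b: (m, 3) arrays of unit back-projected ray directions for
    # the two wavelengths of the same m scene points.  Each cross product
    # va x vb is normal to the plane of refraction, so A is orthogonal to all
    # of them; the least-squares A is the right singular vector of the stacked
    # cross products with the smallest singular value (sign is ambiguous, and
    # pairs with negligible dispersion contribute only noise).
    M = np.cross(rays_a, rays_b)
    _, _, Vt = np.linalg.svd(M)
    A = Vt[-1]
    return A / np.linalg.norm(A)
```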

45 uses a single linear system to solve for both the refraction axis and the calibration object pose [1], even though the two quantities are not inherently related. [sent-118, score-0.96]

46 Image and measurement noise Since the effect of dispersion can be quite small, we added a preprocessing step in our implementation to reduce the impact of random noise. [sent-119, score-0.433]

47 In the absence of noise, these lines must all pass through a point u where the image plane and the refraction axis intersect. [sent-125, score-0.78]

48 The image points {y, (y + w)} are then back-projected into rays and used in the dispersion constraint. [sent-132, score-0.462]
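
A sketch of this line-fitting step: if each dispersion pair is summarised by a point y and a unit direction w, the common intersection u of all the fitted lines can be found by least squares, minimising the summed squared perpendicular distances. The function and variable names are ours.

```python
import numpy as np

def intersect_lines_2d(points, directions):
    # Least-squares intersection of 2D lines, each passing through points[i]
    # with direction directions[i].  Minimising the summed squared
    # perpendicular distances gives the normal equations
    # (sum_i P_i) u = sum_i P_i y_i, where P_i projects onto the normal of
    # line i.  With noise-free input this recovers the point u where the
    # refraction axis meets the image plane.
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for y, w in zip(points, directions):
        w = np.asarray(w, float) / np.linalg.norm(w)
        P = np.eye(2) - np.outer(w, w)
        A += P
        b += P @ np.asarray(y, float)
    return np.linalg.solve(A, b)
```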

49 Layer thicknesses Suppose that the geometry of the points P on the calibration object is known. [sent-137, score-0.413]

50 With appropriate estimates of the refraction axis A, object rotation R, and object translation t⊥ in the plane perpendicular to the axis, Agrawal et al. [sent-138, score-0.848]

51 showed that the thicknesses of the refraction layers can be formulated as a linear system (Eqn. [sent-139, score-0.741]

52 We follow their method, but incorporate a form of triangulation through the use of multiple wavelengths of light. [sent-141, score-0.298]

53 1, a single object point projects to multiple image points when dispersion is present. [sent-143, score-0.436]

54 More formally, if F is a function that projects a 3D point X to a point q on the refractive surface closest to the camera, then for two different wavelengths a and b we define the wavelength triangulation constraint: Triangulation : ? [sent-144, score-0.78]

55 This constraint is imposed simply by including all rays for each object point in the linear system for layer thicknesses (as well as in the linear system for recovering object pose, described below). [sent-149, score-0.405]
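
To illustrate how rays of both wavelengths can enter one linear solve, the sketch below assembles a plausible system: with the axis and the refracted directions fixed, the requirement that the object point lies on the final refracted ray is linear in the layer thicknesses and in the translation component along the axis, and the equations for both wavelengths are simply stacked. This is our own reconstruction under stated assumptions, not the exact system of [1]; all names are illustrative.

```python
import numpy as np

def refract(d, n, mu_in, mu_out):
    eta = mu_in / mu_out
    cos_i = -np.dot(d, n)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - eta**2 * (1.0 - cos_i**2))) * n

def skew(a):
    # [a]_x b == np.cross(a, b)
    return np.array([[0.0, -a[2], a[1]], [a[2], 0.0, -a[0]], [-a[1], a[0], 0.0]])

def solve_thicknesses(axis, mus_per_wavelength, rays_per_wavelength, Xp):
    # Unknowns x = [d_0, ..., d_{n-1}, t_A]: camera-to-port distance, finite
    # layer thicknesses, and the object translation along the axis.
    # Xp[k] = R P_k + t_perp is calibration point k with only the in-plane
    # part of the translation applied; rays_per_wavelength[wl][k] is the unit
    # camera ray back-projected from its image point at wavelength wl, and
    # mus_per_wavelength[wl] lists mu_0 .. mu_n for that wavelength.
    # For a camera ray with refracted directions v_0 .. v_n, the point
    # Xp + t_A*axis must lie on the final ray:
    #   [v_n]_x ( Xp + t_A*axis - sum_i d_i * v_i / (axis . v_i) ) = 0,
    # which is linear in x.  Rays of both wavelengths are stacked together,
    # which is where the wavelength-triangulation effect enters.
    axis = axis / np.linalg.norm(axis)
    rows, rhs = [], []
    for wl, rays in rays_per_wavelength.items():
        mus = mus_per_wavelength[wl]
        for v0, X in zip(rays, Xp):
            dirs = [v0 / np.linalg.norm(v0)]
            for j in range(len(mus) - 1):
                dirs.append(refract(dirs[-1], -axis, mus[j], mus[j + 1]))
            S = skew(dirs[-1])
            cols = [-S @ (d / (axis @ d)) for d in dirs[:-1]]   # d_0 .. d_{n-1}
            cols.append(S @ axis)                               # t_A
            rows.append(np.column_stack(cols))
            rhs.append(-S @ X)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return x
```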

56 In Agrawal et al.’s solution for finding the refraction layer thicknesses, the required object pose parameters R and t⊥ are computed together with A through the “coplanarity constraint” (Eqn. [sent-152, score-0.694]

57 The coplanarity constraint states that each camera ray v and its corresponding object point P, after being transformed into camera coordinates, must lie on a plane of refraction containing the refraction axis: (RP + t) · (A × v) = 0. [sent-156, score-1.659]

58 Furthermore, for a planar calibration object such as a checkerboard where P(3) = 0, columns 7-9 of B are zero (Eqn. [sent-179, score-0.307]

59 Two further solutions are obtained by negating the signs of r8 and r7, corresponding to a reflection across the plane parallel to the refraction layers and passing through the object origin. [sent-193, score-0.653]

60 The correct solution is found after estimating the refraction layer thicknesses by choosing the one with the minimum reprojection error. [sent-194, score-0.826]
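
A sketch of how such a homogeneous linear system can be set up once A is known: the coplanarity constraint (R P + t) · (A × v) = 0 is linear in the entries of R and t, so each correspondence contributes one row. With column-major stacking of R, the last three columns multiply the third coordinate of P and vanish for a planar target, as noted above. Enforcing the orthonormality of R and resolving the sign and scale ambiguities follow [1] and are omitted here; the helper names are ours.

```python
import numpy as np

def coplanarity_system(A, rays, points):
    # One row per correspondence (v, P) of the homogeneous system
    # M [vec(R); t] = 0 implied by (R P + t) . (A x v) = 0, with vec(R) taken
    # column-major so the last three R-columns multiply P[2].
    rows = []
    for v, P in zip(rays, points):
        s = np.cross(A, v)
        rows.append(np.concatenate([np.kron(P, s), s]))
    return np.asarray(rows)

def pose_up_to_scale(A, rays, points):
    # The smallest right singular vector gives [vec(R); t] up to scale and
    # sign; a full solution must still impose the rotation constraints on R.
    M = coplanarity_system(A, rays, points)
    _, _, Vt = np.linalg.svd(M)
    x = Vt[-1]
    R = x[:9].reshape(3, 3, order="F")
    t = x[9:]
    return R, t
```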

61 Nonlinear refinement The estimated refraction axis A, layer thicknesses di, as well as object pose parameters R and t are refined by a nonlinear optimization. [sent-197, score-1.062]

62 Therefore, the optimization implicitly tries to satisfy the wavelength triangulation constraint as defined in Section 4. [sent-199, score-0.298]

63 Then we perform a 1D bisection search for the angle of a camera ray on that plane such that the back-projected ray intersects the given point. [sent-205, score-0.316]
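
A rough sketch of this bisection-based forward projection, assuming no total internal reflection and a point off the axis; the tracing helper and all names are ours.

```python
import numpy as np

def refract(d, n, mu_in, mu_out):
    eta = mu_in / mu_out
    cos_i = -np.dot(d, n)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - eta**2 * (1.0 - cos_i**2))) * n

def project_to_camera_ray(X, axis, interface_depths, mus, iters=60):
    # Find the camera ray that, traced through the flat layers, passes through
    # the 3D point X (camera coordinates).  The ray is confined to the plane
    # of refraction spanned by the axis and X, so a 1D bisection over the
    # in-plane angle theta suffices.
    axis = axis / np.linalg.norm(axis)
    u = X - (axis @ X) * axis                 # in-plane direction towards X
    u = u / np.linalg.norm(u)
    x_ax, x_u = axis @ X, u @ X

    def lateral_error(theta):
        # Lateral (u-)coordinate of the traced ray at X's axial depth, minus
        # the target lateral coordinate; monotone in theta.
        v = np.cos(theta) * axis + np.sin(theta) * u
        p = np.zeros(3)
        for j, depth in enumerate(interface_depths):
            p = p + (depth - axis @ p) / (axis @ v) * v
            v = refract(v, -axis, mus[j], mus[j + 1])
        p = p + (x_ax - axis @ p) / (axis @ v) * v
        return (u @ p) - x_u

    lo, hi = 0.0, np.pi / 2 - 1e-6            # lateral_error(lo) < 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lateral_error(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return np.cos(theta) * axis + np.sin(theta) * u
```

The returned ray direction can then be mapped to a pixel with the in-air camera intrinsics.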

64 Calibration object A standard checkerboard pattern does not suffice for our method because the dispersion effect is not readily apparent under normal lighting conditions. [sent-211, score-0.468]

65 We designed and built a new device that illuminates a perforated grid with two distinct wavelengths of light, forming a precisely known pattern. (Figure: the illuminated points viewed through an air-water interface; best viewed in color, one pair of points marked for visibility.) [sent-212, score-0.413]

66 The dispersion seen here is typical at around 6-7 pixels. [sent-213, score-0.379]

67 The top curve shows the average intensity of a blue pixel as a function of the intensity of the neighboring red pixels when the camera observes a 660nm light source; similarly for the bottom curve for red pixels. [sent-215, score-0.282]

68 We chose the two light wavelengths to be as far apart as possible to maximize the amount of dispersion, while remaining visible to typical cameras equipped with CCD or CMOS sensors and a Bayer-pattern color filter array (CFA). [sent-223, score-0.311]

69 Longer wavelengths in the infrared range suffer from high attenuation in water, while shorter wavelengths are not readily available for LEDs and may pose a hazard to the user. [sent-224, score-0.4]

70 Point localization Our calibration object provides point light sources emitting two wavelengths of light simultaneously. [sent-227, score-0.701]

71 Lens chromatic aberrations An important consideration with refraction-based lenses is that dispersion also occurs within the lens elements, the effects of which are known as chromatic aberrations (CA). [sent-237, score-0.55]

72 It is necessary to correct for CA in order to isolate the dispersive effect of the refraction planes in front of the camera. [sent-238, score-0.638]

73 We first capture images of our calibration object in air, and obtain two sets of camera intrinsic parameters using the red and the blue channels separately. [sent-240, score-0.398]
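
A minimal sketch of this per-channel calibration, assuming the grid detections are already available as pixel coordinates for each channel; OpenCV's standard calibrateCamera is used here as a stand-in for whichever intrinsic calibration routine the authors used.

```python
import cv2

def per_channel_intrinsics(object_points, red_points, blue_points, image_size):
    # object_points: list of (N, 3) float32 arrays (grid geometry per image);
    # red_points / blue_points: lists of (N, 1, 2) float32 arrays of the
    # detections in the red and blue channels.  Calibrating each channel
    # separately yields per-channel intrinsics and distortion, which can be
    # used to undo in-lens chromatic aberration before modelling the housing
    # refraction.
    _, K_r, dist_r, _, _ = cv2.calibrateCamera(
        object_points, red_points, image_size, None, None)
    _, K_b, dist_b, _, _ = cv2.calibrateCamera(
        object_points, blue_points, image_size, None, None)
    return (K_r, dist_r), (K_b, dist_b)
```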

74 Synthetic data For our synthetic data experiments we simulated a camera with a resolution of 4368 × 2912 pixels and a focal length of 4633 pixels, which is based on the intrinsic parameters of the camera used in our real experiments. [sent-246, score-0.387]

75 We performed experiments for the refraction model configurations listed in Table 1. [sent-247, score-0.576]

76 All refractive indices are assumed known and are listed in table 2. [sent-248, score-0.307]

77 The angle between the refraction axis and the camera’s optical axis was set to 4. [sent-256, score-0.822]

78 The calibration pattern was a 27 × 29 planar grid of points emitting both 405nm and 660nm light. [sent-446, score-0.298]

79 For each noise level we generated 100 trials with the calibration pattern placed 440 units in front of the camera and rotated randomly by up to 20 degrees. [sent-451, score-0.421]

80 We include results before the nonlinear refinement step to show the effectiveness of the dispersion and wavelength triangulation constraints. [sent-453, score-0.722]

81 We believe that the refraction model is correct because the error goes to zero in the absence of noise, the refraction axis estimates appear reasonable, and the reprojection error is being minimized properly. [sent-785, score-1.372]

82 a checkerboard inside the tank, a checkerboard affixed to the tank surface, and the camera mounted on a SlyderDolly translation rail. [sent-788, score-0.499]

83 In light of our results, we believe that the additional constraints provided by dispersion and wavelength triangulation have a significant impact on calibration accuracy beyond simply doubling the number of feature points. [sent-789, score-1.032]

84 We collected two sets of data, one using our novel calibration device with a 27 × 29 grid of points, and a second using a 34 × 35 checkerboard pattern for comparison with Agrawal et al. [sent-795, score-0.29]

85 The calibration objects were placed inside an acrylic water tank approximately 45cm behind the front surface and moved around slightly within the camera’s field of view. [sent-801, score-0.55]

86 Although our method gives accurate results, we noticed that the nonlinear refinement step did not improve the estimate of the refraction axis, which was very close to the ground truth to begin with (see Figure 7). [sent-807, score-0.658]

87 Since these observations disagree with our results using synthetic data, we attribute the source of error to measurement noise, lens distortions, and/or the pinhole camera approximation. [sent-810, score-0.276]

88 Interestingly, we found that the estimated d0 was particularly sensitive to variations in the refractive index difference μ2,405nm − μ2,660nm for water, but much less sensitive to variations in both index values that do not change this difference. [sent-812, score-0.331]

89 This is in line with the theory since the triangulation constraint is based on the difference in refraction angle. [sent-813, score-0.721]

90 Instead, we mounted our camera on a SlyderDolly translation rail, which allowed us to move the camera precisely in a straight line. [sent-820, score-0.337]

91 Translate the camera backwards until the tank surface is just in focus. [sent-826, score-0.327]

92 The refraction model parameters were then determined in two steps. [sent-833, score-0.595]

93 Firstly, since the camera orientation is not changed by linear motion, we averaged the rotation measurements to obtain the ground truth for the refraction axis. [sent-834, score-0.718]

94 The parameter d0 was then computed as the initial camera translation in the axis direction. [sent-837, score-0.291]
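
A small sketch of the two ground-truth steps described here, under the assumption that the in-air calibration provides a rotation R_k and translation t_k for each rail position and that the refraction-axis direction is known in the calibration-target frame (e.g. as the tank-surface normal); these assumptions and all names are ours.

```python
import numpy as np

def average_rotation(Rs):
    # Chordal mean: average the matrices and project back onto SO(3).
    U, _, Vt = np.linalg.svd(np.mean(Rs, axis=0))
    R = U @ Vt
    if np.linalg.det(R) < 0.0:     # keep a proper rotation
        U[:, -1] *= -1.0
        R = U @ Vt
    return R

def ground_truth_axis_and_d0(Rs, t0, axis_in_target):
    # Linear rail motion leaves the orientation unchanged, so the per-position
    # rotations are averaged; the axis direction is then expressed in the
    # camera frame, and d0 is taken as the component of the initial
    # translation t0 along that axis.
    R = average_rotation(Rs)
    axis = R @ np.asarray(axis_in_target, float)
    axis = axis / np.linalg.norm(axis)
    return axis, float(axis @ np.asarray(t0, float))
```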

95 ‡Error of each measurement flat refraction layers. [sent-1020, score-0.709]

96 Our method exploits the effect of dispersion, which has previously been ignored, to develop constraints on the calibration parameters by using multiple wavelengths of light. [sent-1021, score-0.427]

97 Additionally, with appropriate scene illumination techniques, the wavelength triangulation constraint may be directly applicable in single-view 3D reconstruction or as a means to improve multi-view reconstruction quality. [sent-1025, score-0.298]

98 Another direction to investigate is calibration with unknown refractive indices, where the dispersion effect may aid in recovering these unknowns. [sent-1026, score-0.862]

99 Measurement of the refractive index of distilled water from the near-infrared region to the ultraviolet region. [sent-1049, score-0.38]

100 Perspective and non-perspective camera models in underwater imaging - overview and error analysis. [sent-1085, score-0.341]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('refraction', 0.576), ('dispersion', 0.379), ('refractive', 0.244), ('calibration', 0.218), ('agrawal', 0.205), ('wavelengths', 0.19), ('underwater', 0.179), ('wavelength', 0.153), ('camera', 0.142), ('thicknesses', 0.142), ('tank', 0.126), ('axis', 0.123), ('light', 0.121), ('triangulation', 0.108), ('flat', 0.1), ('checkerboard', 0.089), ('housing', 0.089), ('sedlazeck', 0.089), ('layer', 0.079), ('interface', 0.071), ('water', 0.067), ('nonlinear', 0.062), ('ray', 0.06), ('refractions', 0.055), ('plane', 0.054), ('rays', 0.053), ('thickness', 0.053), ('leds', 0.049), ('acrylic', 0.047), ('device', 0.046), ('vb', 0.043), ('indices', 0.043), ('perpendicular', 0.041), ('front', 0.039), ('constraint', 0.037), ('synthetic', 0.037), ('equations', 0.036), ('alberta', 0.036), ('daimon', 0.036), ('distilled', 0.036), ('edmonton', 0.036), ('enclosure', 0.036), ('gedge', 0.036), ('refracting', 0.036), ('slyderdolly', 0.036), ('chromatic', 0.034), ('index', 0.033), ('measurement', 0.033), ('va', 0.033), ('chari', 0.032), ('doubling', 0.032), ('treibitz', 0.032), ('surface', 0.031), ('points', 0.03), ('koch', 0.029), ('canada', 0.029), ('aberrations', 0.029), ('port', 0.029), ('watertight', 0.029), ('reprojection', 0.029), ('estimates', 0.028), ('noteworthy', 0.028), ('quartic', 0.028), ('snell', 0.028), ('backwards', 0.028), ('mounted', 0.027), ('point', 0.027), ('coplanarity', 0.026), ('grid', 0.026), ('translation', 0.026), ('lens', 0.025), ('emitting', 0.024), ('refracted', 0.024), ('real', 0.024), ('glass', 0.024), ('geometry', 0.023), ('passing', 0.023), ('simulated', 0.023), ('isolate', 0.023), ('system', 0.023), ('materials', 0.022), ('ab', 0.022), ('configuration', 0.022), ('placed', 0.022), ('varies', 0.022), ('estimated', 0.021), ('impact', 0.021), ('recovering', 0.021), ('lecture', 0.021), ('known', 0.02), ('refinement', 0.02), ('pose', 0.02), ('error', 0.02), ('gong', 0.02), ('pinhole', 0.019), ('transparent', 0.019), ('lie', 0.019), ('blue', 0.019), ('ca', 0.019), ('parameters', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999869 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

2 0.3317273 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we dai s .unive . it ence points in the scene with their projections on the image plane [5]. Unfortunately, no real camera behaves exactly like an ideal pinhole. In fact, in most cases, at least the distortion effects introduced by the lens should be accounted for [19]. Any pinhole-based model, regardless of its level of sophistication, is geometrically unable to properly describe cameras exhibiting a frustum angle that is near or above 180 degrees. For wide-angle cameras, several different para- metric models have been proposed. Some of them try to modify the captured image in order to follow the original propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasipinhole central cameras is supported by an extensive experimental validation.

3 0.2958971 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal re- quirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

4 0.27558887 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

Author: Amit Agrawal, Srikumar Ramalingam

Abstract: Imaging systems consisting of a camera looking at multiple spherical mirrors (reflection) or multiple refractive spheres (refraction) have been used for wide-angle imaging applications. We describe such setups as multi-axial imaging systems, since a single sphere results in an axial system. Assuming an internally calibrated camera, calibration of such multi-axial systems involves estimating the sphere radii and locations in the camera coordinate system. However, previous calibration approaches require manual intervention or constrained setups. We present a fully automatic approach using a single photo of a 2D calibration grid. The pose of the calibration grid is assumed to be unknown and is also recovered. Our approach can handle unconstrained setups, where the mirrors/refractive balls can be arranged in any fashion, not necessarily on a grid. The axial nature of rays allows us to compute the axis of each sphere separately. We then show that by choosing rays from two or more spheres, the unknown pose of the calibration grid can be obtained linearly and independently of sphere radii and locations. Knowing the pose, we derive analytical solutions for obtaining the sphere radius and location. This leads to an interesting result that 6-DOF pose estimation of a multi-axial camera can be done without the knowledge of full calibration. Simulations and real experiments demonstrate the applicability of our algorithm.

5 0.16268566 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LFProbe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By . ude l. edu observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

6 0.15162888 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

7 0.13933113 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

8 0.12405393 409 cvpr-2013-Spectral Modeling and Relighting of Reflective-Fluorescent Scenes

9 0.12031008 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

10 0.11672396 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

11 0.10622587 368 cvpr-2013-Rolling Shutter Camera Calibration

12 0.098609127 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

13 0.088480234 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration

14 0.077715114 260 cvpr-2013-Learning and Calibrating Per-Location Classifiers for Visual Place Recognition

15 0.076002099 84 cvpr-2013-Cloud Motion as a Calibration Cue

16 0.070517324 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

17 0.070080966 124 cvpr-2013-Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform

18 0.06807106 290 cvpr-2013-Motion Estimation for Self-Driving Cars with a Generalized Camera

19 0.063984692 344 cvpr-2013-Radial Distortion Self-Calibration

20 0.06299866 111 cvpr-2013-Dense Reconstruction Using 3D Object Shape Priors


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.124), (1, 0.171), (2, 0.002), (3, 0.04), (4, -0.01), (5, -0.078), (6, -0.068), (7, 0.003), (8, 0.058), (9, 0.044), (10, -0.076), (11, 0.088), (12, 0.145), (13, -0.103), (14, -0.225), (15, 0.057), (16, 0.11), (17, 0.127), (18, -0.031), (19, 0.093), (20, 0.14), (21, 0.015), (22, -0.107), (23, -0.107), (24, -0.032), (25, 0.073), (26, 0.035), (27, 0.047), (28, -0.007), (29, 0.003), (30, 0.047), (31, -0.081), (32, 0.1), (33, -0.029), (34, 0.081), (35, 0.01), (36, 0.017), (37, -0.03), (38, -0.011), (39, 0.009), (40, -0.074), (41, -0.009), (42, -0.03), (43, 0.052), (44, -0.012), (45, -0.001), (46, -0.047), (47, 0.088), (48, 0.001), (49, 0.002)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93732035 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

2 0.83983868 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

Author: Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim

Abstract: Integral imaging display (IID) is a promising technology to provide realistic 3D image without glasses. To achieve a large screen IID with a reasonable fabrication cost, a potential solution is a tiled-lens-array IID (TLA-IID). However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. This work aims at compensating these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. Since the lens arrays are transparent, this task is challenging for traditional calibration methods. In this paper, we propose a novel calibration method based on defining a set of principle observation rays that pass lens centers of the TLA and the camera ’s optical center. The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. The principle observation rays are automatically extracted using a structured light based method from a dense correspondence map between the displayed and captured . pixels. . com, Experiments show that lens array misalignments xme i nlpr . ia . ac . cn @ can be estimated with a standard deviation smaller than 0.4 pixels. Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.

3 0.82372367 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LFProbe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By . ude l. edu observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

4 0.81844866 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

Author: Donald G. Dansereau, Oscar Pizarro, Stefan B. Williams

Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedurefor lenselet-basedplenoptic cameras appropriatefor a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.

5 0.81769013 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we dai s .unive . it ence points in the scene with their projections on the image plane [5]. Unfortunately, no real camera behaves exactly like an ideal pinhole. In fact, in most cases, at least the distortion effects introduced by the lens should be accounted for [19]. Any pinhole-based model, regardless of its level of sophistication, is geometrically unable to properly describe cameras exhibiting a frustum angle that is near or above 180 degrees. For wide-angle cameras, several different para- metric models have been proposed. Some of them try to modify the captured image in order to follow the original propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasipinhole central cameras is supported by an extensive experimental validation.

6 0.80965108 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

7 0.7975449 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

8 0.76070738 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

9 0.6470378 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

10 0.63770705 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging

11 0.56483567 344 cvpr-2013-Radial Distortion Self-Calibration

12 0.55828047 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

13 0.52320349 368 cvpr-2013-Rolling Shutter Camera Calibration

14 0.51369429 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas

15 0.48897633 409 cvpr-2013-Spectral Modeling and Relighting of Reflective-Fluorescent Scenes

16 0.48765054 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

17 0.48320591 127 cvpr-2013-Discovering the Structure of a Planar Mirror System from Multiple Observations of a Single Point

18 0.45799586 286 cvpr-2013-Mirror Surface Reconstruction from a Single Image

19 0.44886479 395 cvpr-2013-Shape from Silhouette Probability Maps: Reconstruction of Thin Objects in the Presence of Silhouette Extraction and Calibration Error

20 0.42426026 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.144), (16, 0.079), (20, 0.248), (26, 0.037), (33, 0.217), (67, 0.034), (69, 0.04), (83, 0.014), (87, 0.082)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80805802 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

2 0.73037869 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

Author: Amit Agrawal, Srikumar Ramalingam

Abstract: Imaging systems consisting of a camera looking at multiple spherical mirrors (reflection) or multiple refractive spheres (refraction) have been used for wide-angle imaging applications. We describe such setups as multi-axial imaging systems, since a single sphere results in an axial system. Assuming an internally calibrated camera, calibration of such multi-axial systems involves estimating the sphere radii and locations in the camera coordinate system. However, previous calibration approaches require manual intervention or constrained setups. We present a fully automatic approach using a single photo of a 2D calibration grid. The pose of the calibration grid is assumed to be unknown and is also recovered. Our approach can handle unconstrained setups, where the mirrors/refractive balls can be arranged in any fashion, not necessarily on a grid. The axial nature of rays allows us to compute the axis of each sphere separately. We then show that by choosing rays from two or more spheres, the unknown pose of the calibration grid can be obtained linearly and independently of sphere radii and locations. Knowing the pose, we derive analytical solutions for obtaining the sphere radius and location. This leads to an interesting result that 6-DOF pose estimation of a multi-axial camera can be done without the knowledge of full calibration. Simulations and real experiments demonstrate the applicability of our algorithm.

3 0.726771 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal re- quirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

4 0.72631526 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LFProbe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By . ude l. edu observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

5 0.72514689 163 cvpr-2013-Fast, Accurate Detection of 100,000 Object Classes on a Single Machine

Author: Thomas Dean, Mark A. Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, Jay Yagnik

Abstract: Many object detection systems are constrained by the time required to convolve a target image with a bank of filters that code for different aspects of an object’s appearance, such as the presence of component parts. We exploit locality-sensitive hashing to replace the dot-product kernel operator in the convolution with a fixed number of hash-table probes that effectively sample all of the filter responses in time independent of the size of the filter bank. To show the effectiveness of the technique, we apply it to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM. This represents a speed-up of approximately 20,000 times— four orders of magnitude— when compared withperforming the convolutions explicitly on the same hardware. While mean average precision over the full set of 100,000 object classes is around 0.16 due in large part to the challenges in gathering training data and collecting ground truth for so many classes, we achieve a mAP of at least 0.20 on a third of the classes and 0.30 or better on about 20% of the classes.

6 0.72459817 115 cvpr-2013-Depth Super Resolution by Rigid Body Self-Similarity in 3D

7 0.72352844 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

8 0.72314781 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

9 0.72295344 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration

10 0.72229856 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

11 0.72188479 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

12 0.72148323 331 cvpr-2013-Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis

13 0.72044003 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation

14 0.72039741 286 cvpr-2013-Mirror Surface Reconstruction from a Single Image

15 0.72018301 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path

16 0.71993899 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction

17 0.71956223 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image

18 0.71897542 143 cvpr-2013-Efficient Large-Scale Structured Learning

19 0.71871084 118 cvpr-2013-Detecting Pulse from Head Motions in Video

20 0.71851856 248 cvpr-2013-Learning Collections of Part Models for Object Recognition