cvpr cvpr2013 cvpr2013-27 knowledge-graph by maker-knowledge-mining

27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation


Source: pdf

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. [sent-3, score-1.345]

2 A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. [sent-5, score-0.378]

3 Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. [sent-6, score-0.641]

4 Introduction Reconstruction of transparent objects has gathered interest in the last few years [5, 8, 11, 12, 13, 14, 17]. [sent-14, score-0.501]

5 Methods could be broadly classified into approaches that rely on physical (material) properties of transparent objects, and approaches that try to extend traditional shape acquisition approaches to the case of transparent objects. [sent-16, score-0.999]

6 Among the approaches relying on material properties, geometric and radiometric cues are the most prominent inputs to reconstruction algorithms. [sent-17, score-0.422]

7 Figure 1: Given a transparent refractive object, a light source and a camera, the pixel q receives light from sources at positions Q1 and Q2. [sent-20, score-1.307]

8 In both works Q1/Q2 is estimated by using a calibrated computer monitor as light source (CRT monitor in our case). [sent-26, score-0.877]

9 In this paper, we focus on specular transparent objects, that is, objects for which incoming light is partly reflected off the surface and partly enters the object, after undergoing a refraction at the surface. [sent-28, score-0.831]

10 While these works consider specular surfaces (both mirrors and transparent surfaces), they do not utilize the fact that transparent objects leave a shape dependent radiometric signature in images. [sent-32, score-1.366]

11 Since transparent surfaces also reflect light falling on them, photometric stereo and related algorithms have also been proposed to reconstruct the exterior of transparent objects [20, 13]. [sent-33, score-1.346]

12 While reconstruction using such methods gives good results, we argue that modeling transparent surfaces as pure mirrors discards important information about their physical properties (refractive index, internal structure, etc.). [sent-35, score-0.659]

13 For every “light-path” that we capture, we record both geometric information (position and direction of light rays captured by the camera and originating from the light source, depending on requirement) and radiometric information (radiance of light at the beginning and end of each light-path). [sent-39, score-1.004]
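
As an illustration only (the field names below are ours, not the paper's), the per-light-path quantities just described can be collected into a small record; a minimal Python sketch:

from dataclasses import dataclass
import numpy as np

@dataclass
class LightPath:
    # Geometric information about the two ends of the piece-wise linear path.
    source_point: np.ndarray     # 3D position of the emitting monitor pixel
    source_dir: np.ndarray       # unit direction of the emitted ray
    pixel: tuple                 # (row, col) of the camera pixel capturing the ray
    pixel_ray: np.ndarray        # unit line-of-sight direction of that pixel
    # Radiometric information at the beginning and end of the light-path.
    radiance_emitted: float
    radiance_received: float

    def ratio(self) -> float:
        # Radiometric cue: fraction of the emitted radiance that reaches the camera.
        return self.radiance_received / self.radiance_emitted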

14 A first main result is that this allows us to reconstruct a transparent object from a single configuration of the acquisition setup, i.e. [sent-40, score-0.613]

15 Related Work In the past, several approaches have used either geometric or radiometric cues to reconstruct transparent surfaces. [sent-55, score-0.869]

16 Geometric approaches typically measure the deviation from perspective imaging produced by a refractive transparent object, and recover the shape as a solution that explains the observation [21]. [sent-56, score-0.806]

17 In [17], the authors present a minimal case study of the conditions in which refractive surfaces can be reconstructed. [sent-57, score-0.428]

18 They re-cast transparent object reconstruction as the task of reconstructing the path of each individual light-path that is captured by a camera after refraction through a transparent surface. [sent-58, score-1.292]

19 Other examples of shape recovery from distortion analysis include the more recent work by [13], which analyzes the specific case of a single dynamic transparent surface that distorts a known background and is observed by multiple cameras. [sent-60, score-0.591]

20 Apart from geometry, radiometric information also turns out to be very important in the case of transparent objects since they simultaneously reflect and refract light. [sent-62, score-0.767]

21 Since they reflect light like a specular surface, many recent photometric approaches have tried to reconstruct transparent surfaces by studying their specularities. [sent-63, score-0.927]

22 While [20] provides a low cost approach to reconstruction by studying specular highlights, [13] shows how to reconstruct transparent surfaces with inhomogeneous refractive interiors, by measuring highlights multiple times to remove extraneous effects like scattering, interreflections etc. [sent-64, score-1.333]

23 When unpolarized light is reflected or transmitted across a dielectric refractive surface, it gets partially polarized. [sent-66, score-0.898]

24 Since refractive objects present a challenging reconstruction problem, many authors have resorted to using active approaches for reconstructions. [sent-71, score-0.474]

25 Reflection and Refraction Caused by Transparent Surfaces In this section, we describe an image formation model for transparent surfaces that forms the basis of our reconstruction approach. [sent-74, score-0.577]

26 Let X be a calibrated point light source emitting unpolarized light. [sent-76, score-0.414]

27 By calibrated, we mean here that we know the “amount” of light emitted in every direction (see section 6 for practical approaches/considerations). [sent-77, score-0.314]

28 If the light ray hits a transparent surface, part of its energy gets reflected in a mirror direction and part enters the transparent object, after undergoing a refraction at the surface. [sent-79, score-1.636]

29 Both the reflection and the refraction happen within a plane of refraction π that is spanned by the point of intersection of the light ray and the surface, and the surface normal at that point. [sent-80, score-0.99]

30 The geometric aspects of reflection and refraction are illustrated in Figure 2: (left top) Description of the general theory behind our approach. [sent-81, score-0.334]

31 While the acquisition is similar to that of Kutulakos et al [17], we also include radiance in our measurements (depicted by the changing color of rays while they travel from the illuminant to the camera pixel). [sent-82, score-0.529]

32 A camera facing a monitor with transparent objects in between. [sent-90, score-0.869]

33 Let θ1 be the angle between the incoming light ray and the surface normal (cf. Figure 2). [sent-92, score-0.666]

34 The reflected light ray forms the opposite angle θ1 with the normal, whereas the angle θ2, formed by the refracted ray and the normal, is given by: sin θ2 = sin θ1 (n1 / n2) (1). [sent-93, score-0.845]

35 Here n1 and n2 are the refractive indices of the media on both sides of the surface (n2 for the inside, n1 for the outside, where the light source is located). [sent-94, score-0.866]

36 Let I be the irradiance of the light source in the considered direction. [sent-96, score-0.292]

37 The irradiance of the reflected ray (respectively the refracted ray) is given by [1]: Ir = (I/2) [ sin²(θ1 − θ2)/sin²(θ1 + θ2) + tan²(θ1 − θ2)/tan²(θ1 + θ2) ] (2), It = I − Ir (3). [sent-97, score-0.465]
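
As a concrete illustration of equations (1)-(3), the short Python sketch below computes the refraction angle and the reflected/transmitted irradiance. It is only a sketch, assuming the standard unpolarized Fresnel form for equations (2)-(3), and is not taken verbatim from the paper:

import math

def snell_theta2(theta1, n1, n2):
    # Equation (1): sin(theta2) = sin(theta1) * n1 / n2
    s = math.sin(theta1) * n1 / n2
    return None if s > 1.0 else math.asin(s)   # None signals total internal reflection

def fresnel_unpolarized(theta1, n1, n2, I=1.0):
    # Equations (2)-(3), assumed form:
    # Ir = I/2 [sin^2(t1-t2)/sin^2(t1+t2) + tan^2(t1-t2)/tan^2(t1+t2)],  It = I - Ir
    theta2 = snell_theta2(theta1, n1, n2)
    if theta2 is None:
        return I, 0.0                           # everything is reflected
    if theta1 == 0.0:                           # normal incidence: use the limit form
        R = ((n1 - n2) / (n1 + n2)) ** 2
    else:
        Rs = math.sin(theta1 - theta2) ** 2 / math.sin(theta1 + theta2) ** 2
        Rp = math.tan(theta1 - theta2) ** 2 / math.tan(theta1 + theta2) ** 2
        R = 0.5 * (Rs + Rp)
    return I * R, I * (1.0 - R)

# Radiance ratio r = Ir / I for water (n2 = 1.33) in air (n1 = 1.0)
for deg in (10, 30, 50, 70, 85):
    Ir, It = fresnel_unpolarized(math.radians(deg), 1.0, 1.33)
    print(deg, round(Ir, 4), round(It, 4))

Printing the ratio over a range of angles reproduces the behaviour discussed below: r is nearly flat at small incidence angles and changes rapidly at the large angles used in the practical setup.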

38 Note that even for unpolarized light sources, the reflected and refracted light will in general be polarized. [sent-100, score-0.865]

39 if light gets reflected or refracted a second time (see section 5. [sent-103, score-0.531]

40 Surface Depth Reconstruction: Reflection Surface reconstruction is done for individual camera pixels, by estimating the depth of the surface along the lines of sight of pixels. [sent-112, score-0.474]

41 Let d be the depth of the object along this line of sight, P be the intersection point of the object surface and the line of sight, and n̂ the surface normal at that point. [sent-120, score-0.412]

42 3), we know the point Q on the monitor whose reflection is seen in the pixel. [sent-123, score-0.381]

43 We do so by first computing the incident angle θ1 between the surface normal and the incident light ray, after which d is trivial. [sent-125, score-0.571]

44 Since our setup is radiometrically calibrated, we equate the measured radiance ratio r = Ir / I to the reflected and refracted angles using equation (2): r = (1/2) [ sin²(θ1 − θ2)/sin²(θ1 + θ2) + tan²(θ1 − θ2)/tan²(θ1 + θ2) ] (4). [sent-126, score-0.635]

45 Note that although we are considering the case of reflection here, the refracted angle θ2 nevertheless appears in the equation, due to the object’s refractive property. [sent-128, score-0.52]
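
Once r is measured and the refractive index is known, θ1 (and from it the depth d) can be recovered by a one-dimensional root search. The sketch below is illustrative only; it repeats the unpolarized Fresnel ratio assumed above and presumes that r is monotonic over the searched interval, which holds for water-like indices well above 30 degrees of incidence:

import math
from scipy.optimize import brentq

def ratio(theta1, n12):
    # r(theta1) = Ir / I for relative index n12 = n2 / n1 (same assumed form as constraint (4)).
    theta2 = math.asin(math.sin(theta1) / n12)
    Rs = math.sin(theta1 - theta2) ** 2 / math.sin(theta1 + theta2) ** 2
    Rp = math.tan(theta1 - theta2) ** 2 / math.tan(theta1 + theta2) ** 2
    return 0.5 * (Rs + Rp)

def recover_theta1(r_measured, n12, lo=math.radians(35.0), hi=math.radians(89.0)):
    # Solve ratio(theta1) = r_measured; the practical setup keeps theta1 in this range.
    return brentq(lambda t: ratio(t, n12) - r_measured, lo, hi)

print(math.degrees(recover_theta1(0.10, 1.33)))   # roughly 66-67 degrees for water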

46 Second, θ1 is typically (much) larger than 30°, due to the practical setup which requires that the camera have both a reflected and a direct view of the monitor. [sent-149, score-0.37]

47 Consider the graph of r as a function of θ1 for the refractive index of water (n2 = 1.33). [sent-150, score-0.464]

48 From images acquired with a completely static setup, we are able to compute the depth of the transparent object, for each pixel in which a reflection is visible. [sent-157, score-0.692]

49 To do so, we need to know the refractive index of the object’s material. [sent-158, score-0.389]

50 Note that in the same scenario, we have much better normal information because of radiometric information. [sent-166, score-0.306]

51 While LP-2 is robust because correspondences are far away, it is highly impractical since the use of LCDs for correspondence is problematic (because of light fall-off, scattering, etc.). [sent-167, score-0.329]

52 The main difference is however that in order to observe the refraction, the camera must be inside the same medium as the object or, be located in a medium with the same refractive index as that of the object. [sent-171, score-0.476]

53 For example, a camera looking at a monitor immersed in water (or, more realistically, put next to an aquarium’s bottom plate), might allow one to reconstruct the water’s surface. [sent-172, score-0.557]

54 In practice however, we require dense matches between the camera image and the CRT monitor, which is why we acquire multiple images of a sequence of Gray-coded patterns displayed on the monitor [2]. [sent-176, score-0.368]

55 For each individual camera pixel we determine the point on the monitor whose reflection or refraction in the object is seen by that pixel. [sent-179, score-0.626]
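
The dense monitor-camera correspondence itself is standard structured light; the paper refers to [2] for the actual procedure, so the following is only a generic sketch of Gray-coded column patterns and their decoding (row patterns work the same way):

import numpy as np

def gray_code_patterns(width, n_bits):
    # One binary column-profile per bit: bit b of the Gray code of monitor column x.
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary-reflected Gray code
    return [((gray >> b) & 1).astype(np.uint8) for b in range(n_bits)]

def decode_gray(bit_images):
    # bit_images: thresholded camera captures of the patterns, least significant bit first.
    gray = np.zeros(bit_images[0].shape, dtype=np.int64)
    for b, img in enumerate(bit_images):
        gray |= img.astype(np.int64) << b
    binary = gray.copy()                            # convert Gray code back to binary
    mask = gray >> 1
    while mask.any():
        binary ^= mask
        mask >>= 1
    return binary                                   # monitor column index seen by each camera pixel

n_bits = int(np.ceil(np.log2(1024)))                # e.g. a 1024-pixel-wide monitor
patterns = gray_code_patterns(1024, n_bits)
# Each 1D profile is tiled vertically, displayed full-screen, captured (together with its
# inverse for robust thresholding), and the thresholded bit images are fed to decode_gray.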

56 Unknown Refractive Index The above approach requires knowledge of the object’s refractive index. [sent-186, score-0.336]

57 We thus have a total of three unknowns, the refractive index (or, the ratio n12 = n2/n1 of the refractive indices of object and air) and the two angles θ1 and θ2, and three constraints: the two constraints of form (4) and one that links the two angles (ref. Figure 2, [1]). [sent-191, score-0.756]

58 However, even when light is only reflected off a transparent object surface, equations (4) can be used to solve for the relative refractive index n12 = n2 / n1, even from a single pixel. [sent-193, score-1.254]
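
A sketch of the simplest version of this idea: if the geometry already fixes θ1 (for instance via a hypothesized depth), the measured ratio r alone pins down the relative index n12 by a one-dimensional root search. This is illustrative only; the paper's actual system couples the angles and the index through the constraints listed above, and the ratio is assumed to be monotone in n12 over the bracket:

import math
from scipy.optimize import brentq

def ratio(theta1, n12):
    # Unpolarized Fresnel reflectance, used here as the assumed form of constraint (4).
    theta2 = math.asin(math.sin(theta1) / n12)
    Rs = math.sin(theta1 - theta2) ** 2 / math.sin(theta1 + theta2) ** 2
    Rp = math.tan(theta1 - theta2) ** 2 / math.tan(theta1 + theta2) ** 2
    return 0.5 * (Rs + Rp)

def recover_n12(r_measured, theta1, lo=1.05, hi=2.5):
    # Bracketing interval chosen to cover common transparent materials (water to dense glass).
    return brentq(lambda n: ratio(theta1, n) - r_measured, lo, hi)

r_obs = ratio(math.radians(60.0), 1.33)             # simulate a measurement for water
print(recover_n12(r_obs, math.radians(60.0)))       # recovers ~1.33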

59 Thus, we have extended the scenarios where reconstruction is possible to the case of reflection off the object surface, when refractive index is unknown. [sent-194, score-0.596]

60 if the camera acquires an image of the monitor through a transparent object, with refractions happening both at its front and back sides. [sent-200, score-0.898]

61 Let us reconsider equations (2) and (3): they indicate how much energy of the incoming light is reflected and how much is refracted. [sent-204, score-0.439]

62 In addition however, for dielectric materials, the reflected and refracted light is polarized, even if the incoming light is not. [sent-205, score-0.845]

63 It can be shown that polarization adds a factor to the radiance ratio that is dependent on the angle between the two planes of reflection/refraction encountered along a single light-path. [sent-206, score-0.416]

64 Consider two cameras looking at a transparent object, which refracts light from a known illuminant twice. [sent-208, score-0.81]

65 In this case, radiance ratio observations are made by each camera for each light-path. [sent-211, score-0.363]

66 It is possible to show that for every light-path, there exists a 1D curve of incident angle pairs such that one angle occurs at each “bounce” of a light-path and the radiance ratio is satisfied [1]. [sent-212, score-0.418]
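
A rough numerical illustration of this 1D family of angle pairs, under simplifying assumptions of our own: the polarization-dependent factor mentioned above is ignored, so the observed ratio is modelled as the product of two unpolarized Fresnel transmittances (entry and exit refraction):

import math
import numpy as np

def transmittance(theta, n12):
    # Unpolarized Fresnel transmittance of a single refraction with relative index n12.
    s = math.sin(theta) / n12
    if s >= 1.0:
        return 0.0                                   # total internal reflection
    theta2 = math.asin(s)
    if theta == 0.0:
        R = ((n12 - 1.0) / (n12 + 1.0)) ** 2
    else:
        Rs = math.sin(theta - theta2) ** 2 / math.sin(theta + theta2) ** 2
        Rp = math.tan(theta - theta2) ** 2 / math.tan(theta + theta2) ** 2
        R = 0.5 * (Rs + Rp)
    return 1.0 - R

def angle_pair_curve(r_obs, n12, samples=180, tol=5e-3):
    # For each first-bounce angle, pick the second-bounce angle (exit refraction, relative
    # index 1/n12) whose combined transmittance best matches the observed ratio.
    thetas = np.radians(np.linspace(0.0, 89.0, samples))
    curve = []
    for ta in thetas:
        Ta = transmittance(ta, n12)
        if Ta <= 0.0:
            continue
        target = r_obs / Ta
        diffs = [abs(transmittance(tb, 1.0 / n12) - target) for tb in thetas]
        j = int(np.argmin(diffs))
        if diffs[j] < tol:
            curve.append((math.degrees(ta), math.degrees(thetas[j])))
    return curve

print(len(angle_pair_curve(0.92, 1.33)))             # number of consistent angle pairs found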

67 In addition to the geometric constraints expressed in [17] we have one extra constraint per light ray, so it is now possible to solve for the 3D structure of the transparent object given known refractive index. [sent-214, score-1.084]

68 An ∗ symbol represents the fact that even in the case of only reflection, the relative refractive index can be computed. [sent-217, score-0.389]

69 The scenarios of reconstruction of transparent objects that can be solved with the help of radiance measurements are summarized in Table 1. [sent-221, score-0.746]

70 It is interesting to note that transparent objects have weaker minimal requirements for reconstruction than mirror-like objects. [sent-222, score-0.644]

71 Practical considerations The above theory shows that the radiance of a final light ray in a light-path contains information that could be used to reconstruct the entire light-path. [sent-225, score-0.74]

72 In this section we describe important elements of our experiments to collect radiance measurements for reconstruction. [sent-226, score-0.288]

73 1) We use an illuminant with known geometry to emit unpolarized light in a desired set of directions. [sent-228, score-0.433]

74 2) Light from the illuminant interacts with the transparent object, and reflects / refracts off its surface towards the camera after one bounce. [sent-229, score-0.783]

75 3) The camera then captures both the direction and radiance of some of the reflected / refracted light rays, which are used for reconstruction. [sent-230, score-0.863]

76 Since we need to capture the position and radiance of an individual light ray, we adopt the pin-hole model for the camera (smallest aperture and large focal length). [sent-232, score-0.567]

77 We arrived at an acceptable range of focal lengths that have the desired depth of field and capture both the monitor and the object in focus, by trial and error. [sent-234, score-0.376]

78 Finally, for each captured ray, we compute the corresponding pixel on the monitor from which the ray originated using standard methods [3]. [sent-235, score-0.424]

79 Unpolarized illuminant In our experiments, we use a flat CRT monitor (LCD monitors emit polarized light), whose pose is computed with respect to an internally calibrated camera [18]. [sent-236, score-0.576]

80 We capture the light emitted by each pixel of the monitor in ... Figure 4: (Left top) Normal map of “Fanta bottle” sequence. [sent-238, score-0.554]

81 Phenomena like scratches on the bottle, an inhomogeneous refractive index, and violation of the single-bounce assumption through occlusion are some adverse effects, but reliable reconstructions are still achieved. [sent-243, score-0.475]

82 Interreflections A common problem with measuring illumination reflected / refracted off specular transparent objects is interreflections. [sent-255, score-0.879]

83 They not only corrupt the radiance measurement, but also pose a problem to correspondence estimation. [sent-256, score-0.297]

84 Instead of using a projector to light the scene, we use the CRT monitor. [sent-258, score-0.516]

85 Correspondence Acquiring correspondence between pixels on the monitor and pixels on the camera that correspond to the same light-path becomes slightly cumbersome when transparent objects are involved [2]. [sent-269, score-0.921]

86 Experiments and Results In the previous sections, we showed that radiance ratios could be used to reconstruct transparent surfaces, which can help in reducing the number of measurements required for reconstruction. [sent-274, score-0.842]

87 Additional results w.r.t. camera noise and refractive index mismatch are present in the supplementary materials. [sent-288, score-0.476]

88 Experiment 2: “Water Sequence” Figure 5 (Left column) shows some images acquired in order to reconstruct the surface of water in a plastic bowl. [sent-289, score-0.333]

89 Because of the planar nature of the scene, we compute correspondence by simply computing a homography between the reflected image and the direct image of a photograph displayed on the monitor. [sent-293, score-0.295]
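
For this planar case the correspondence is a single homography; a hypothetical OpenCV sketch (the point coordinates below are placeholders, and the use of RANSAC is our assumption, not the paper's procedure):

import cv2
import numpy as np

# Four or more corresponding points identified in the direct and reflected views of the
# photograph shown on the monitor; the coordinates below are placeholders.
pts_direct = np.float32([[102, 80], [851, 95], [830, 610], [120, 590]])
pts_reflected = np.float32([[210, 700], [930, 690], [915, 260], [230, 270]])

H, mask = cv2.findHomography(pts_direct, pts_reflected, cv2.RANSAC, 3.0)

# H maps a pixel of the direct image to its counterpart in the reflected image; applying it
# densely yields per-pixel correspondence for the planar scene.
grid = np.dstack(np.meshgrid(np.arange(960), np.arange(720))).reshape(-1, 1, 2).astype(np.float32)
mapped = cv2.perspectiveTransform(grid, H).reshape(720, 960, 2)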

90 Note how global components of the image are present even in places where there is no direct light (Figure 5, red square). [sent-302, score-0.287]

91 On the other hand, robust measurement of the position and direction of light incident on the glasses from the monitor requires a large set of images to be captured while moving the monitor over, say, an optical bench. [sent-305, score-0.843]

92 Again, like in the case of the “Water Sequence”, we hypothesize various depth values along each back-projected pixel ray, and test their validity using computed radiance ratios. [sent-307, score-0.34]
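
Schematically, the per-pixel hypothesize-and-test loop for the reflection case could look as follows. This is a sketch under our own modelling assumptions (half-vector mirror normal, unpolarized Fresnel ratio); C is the camera centre, ray the pixel's unit line of sight, Q the corresponding monitor point, and n12 the known relative refractive index:

import numpy as np

def fresnel_ratio(theta1, n12):
    # Unpolarized Fresnel reflectance r = Ir / I (assumed form of the radiometric constraint).
    theta2 = np.arcsin(np.sin(theta1) / n12)
    Rs = np.sin(theta1 - theta2) ** 2 / np.sin(theta1 + theta2) ** 2
    Rp = np.tan(theta1 - theta2) ** 2 / np.tan(theta1 + theta2) ** 2
    return 0.5 * (Rs + Rp)

def depth_by_ratio(C, ray, Q, r_meas, n12, depths):
    # Hypothesize depths along the back-projected pixel ray and keep the one whose
    # predicted reflection ratio best explains the measured one.
    best_d, best_err = None, np.inf
    for d in depths:
        P = C + d * ray                                # hypothesized surface point
        v_cam = (C - P) / np.linalg.norm(C - P)        # direction towards the camera
        v_src = (Q - P) / np.linalg.norm(Q - P)        # direction towards the monitor point
        n = v_cam + v_src
        n /= np.linalg.norm(n)                         # mirror normal = half-vector
        theta1 = np.arccos(np.clip(np.dot(n, v_src), -1.0, 1.0))
        err = abs(fresnel_ratio(theta1, n12) - r_meas)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Example call with hypothetical values:
# depth_by_ratio(C, ray, Q, r_meas=0.08, n12=1.33, depths=np.linspace(0.3, 1.5, 500))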

93 Note that optimizing depth and normal simultaneously would serve to remove the artefacts seen in the figure, especially by enforcing depth-normal consistency (differentiation of the depth gives the normal). [sent-313, score-0.291]
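
One way to check (or enforce) that consistency is to differentiate the estimated depth map and compare the resulting normals with the radiometrically estimated ones; a minimal sketch, assuming a pinhole camera with intrinsics fx, fy, cx, cy:

import numpy as np

def normals_from_depth(depth, fx, fy, cx, cy):
    # Back-project every pixel to 3D under the pinhole model, then differentiate.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) / fx * depth
    Y = (v - cy) / fy * depth
    P = np.dstack([X, Y, depth])                    # per-pixel 3D points
    dPdu = np.gradient(P, axis=1)                   # tangent along image columns
    dPdv = np.gradient(P, axis=0)                   # tangent along image rows
    n = np.cross(dPdu, dPdv)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12
    return n                                        # per-pixel unit normals

# Comparing these normals with the radiometrically obtained ones gives a simple
# per-pixel depth-normal consistency measure.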

94 Discussion and Conclusion Reconstruction of transparent objects remains a challenging problem because of the lack of cues that are normally available for other objects. [sent-315, score-0.542]

95 Other applications of our approach are in verifying the validity of light transport matrices, radiometric calibration, etc. [sent-318, score-0.492]

96 Figure 5: (Left column) Two of 25 images used to compute the direct and global images [15] to remove the effect of interreflections and caustics on radiance measurement. [sent-320, score-0.48]
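
The direct/global computation of [15] is not spelled out here, but separations of this kind are commonly obtained from per-pixel extrema over shifted high-frequency illumination patterns. The following sketch assumes patterns with roughly 50% duty cycle; the factor 2 is a consequence of that assumption, not a detail taken from the paper:

import numpy as np

def separate_direct_global(images):
    # images: list/stack of captures of the scene under shifted high-frequency patterns.
    stack = np.stack(images).astype(np.float64)
    L_max = stack.max(axis=0)
    L_min = stack.min(axis=0)
    direct = L_max - L_min        # lit only when the pattern directly illuminates the point
    global_ = 2.0 * L_min         # interreflections, caustics, scattering, etc.
    return direct, global_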

97 Shape and refractive index recovery from single-view polarisation images. [sent-386, score-0.389]

98 Shape estimation of transparent objects by using inverse polarization ray tracing. [sent-401, score-0.736]

99 Reconstructing the surface of inhomogeneous transparent scenes by scatter-trace photography. [sent-412, score-0.619]

100 Adequate reconstruction of transparent objects on a shoestring budget. [sent-458, score-0.608]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('transparent', 0.47), ('refractive', 0.336), ('monitor', 0.281), ('radiance', 0.245), ('light', 0.235), ('radiometric', 0.231), ('reflected', 0.16), ('refraction', 0.158), ('ray', 0.143), ('refracted', 0.136), ('surface', 0.121), ('interreflections', 0.117), ('bounce', 0.114), ('reconstruction', 0.107), ('reflection', 0.1), ('unpolarized', 0.099), ('crt', 0.098), ('depth', 0.095), ('polarization', 0.092), ('camera', 0.087), ('reconstruct', 0.084), ('specular', 0.082), ('fanta', 0.08), ('normal', 0.075), ('water', 0.075), ('illuminant', 0.07), ('bottle', 0.064), ('sight', 0.064), ('triangulation', 0.06), ('acquisition', 0.059), ('surfaces', 0.056), ('index', 0.053), ('direct', 0.052), ('correspondence', 0.052), ('calibrated', 0.049), ('angle', 0.048), ('incident', 0.046), ('kutulakos', 0.045), ('incoming', 0.044), ('measurements', 0.043), ('geometric', 0.043), ('scattering', 0.042), ('cues', 0.041), ('practical', 0.041), ('avinash', 0.04), ('bounces', 0.04), ('caustics', 0.04), ('cms', 0.04), ('insets', 0.04), ('projet', 0.04), ('wine', 0.039), ('emitted', 0.038), ('minimal', 0.036), ('dcraw', 0.035), ('dielectric', 0.035), ('immersion', 0.035), ('radiometry', 0.035), ('refract', 0.035), ('refracts', 0.035), ('theory', 0.033), ('bowl', 0.033), ('interreflection', 0.033), ('lcd', 0.033), ('miyazaki', 0.033), ('polarized', 0.033), ('radiometrically', 0.033), ('transmitted', 0.033), ('ratio', 0.031), ('source', 0.031), ('fluorescent', 0.031), ('ihrke', 0.031), ('quartic', 0.031), ('refractions', 0.031), ('objects', 0.031), ('homography', 0.031), ('setup', 0.03), ('unique', 0.03), ('bottom', 0.03), ('acquires', 0.029), ('aliaga', 0.029), ('emit', 0.029), ('fresnel', 0.029), ('nikon', 0.029), ('inhomogeneous', 0.028), ('double', 0.028), ('acquired', 0.027), ('morris', 0.027), ('avenue', 0.027), ('internally', 0.027), ('ref', 0.027), ('highlights', 0.027), ('irradiance', 0.026), ('glass', 0.026), ('remove', 0.026), ('calibration', 0.026), ('canon', 0.026), ('plastic', 0.026), ('mirrors', 0.026), ('rays', 0.025), ('reconstructions', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999976 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

2 0.2958971 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

3 0.28579244 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

Author: Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-Ichiro Taniguchi

Abstract: Current object-recognition algorithms use local features, such as scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches though cannot apply to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

4 0.25906354 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LFProbe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

5 0.19629933 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasi-pinhole central cameras is supported by an extensive experimental validation.

6 0.18864095 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

7 0.18582433 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

8 0.1660834 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

9 0.15030961 286 cvpr-2013-Mirror Surface Reconstruction from a Single Image

10 0.13252972 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

11 0.13173598 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration

12 0.12858583 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction

13 0.12506835 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns

14 0.12057102 330 cvpr-2013-Photometric Ambient Occlusion

15 0.11647096 410 cvpr-2013-Specular Reflection Separation Using Dark Channel Prior

16 0.11640663 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

17 0.11465125 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

18 0.11420038 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

19 0.10815147 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

20 0.10715658 465 cvpr-2013-What Object Motion Reveals about Shape with Unknown BRDF and Lighting


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.154), (1, 0.251), (2, 0.011), (3, 0.063), (4, -0.025), (5, -0.129), (6, -0.111), (7, 0.062), (8, 0.073), (9, 0.038), (10, -0.097), (11, 0.021), (12, 0.105), (13, -0.079), (14, -0.199), (15, 0.058), (16, 0.12), (17, 0.101), (18, -0.023), (19, 0.08), (20, 0.084), (21, 0.007), (22, -0.059), (23, -0.075), (24, -0.041), (25, 0.059), (26, 0.068), (27, 0.063), (28, -0.019), (29, -0.022), (30, 0.038), (31, -0.1), (32, 0.159), (33, -0.086), (34, 0.126), (35, -0.058), (36, 0.017), (37, -0.0), (38, -0.033), (39, -0.057), (40, -0.085), (41, 0.034), (42, -0.058), (43, 0.052), (44, -0.021), (45, 0.056), (46, -0.036), (47, 0.088), (48, -0.022), (49, -0.054)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92805845 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

2 0.87426716 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LFProbe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

3 0.85468805 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

4 0.79845834 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

Author: Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-Ichiro Taniguchi

Abstract: Current object-recognition algorithms use local features, such as scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches though cannot apply to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

5 0.7122497 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

Author: Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim

Abstract: Integral imaging display (IID) is a promising technology to provide realistic 3D image without glasses. To achieve a large screen IID with a reasonable fabrication cost, a potential solution is a tiled-lens-array IID (TLA-IID). However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. This work aims at compensating these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. Since the lens arrays are transparent, this task is challenging for traditional calibration methods. In this paper, we propose a novel calibration method based on defining a set of principle observation rays that pass lens centers of the TLA and the camera’s optical center. The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. The principle observation rays are automatically extracted using a structured light based method from a dense correspondence map between the displayed and captured pixels. Experiments show that lens array misalignments can be estimated with a standard deviation smaller than 0.4 pixels. Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.

6 0.69097406 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

7 0.67997426 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

8 0.66744787 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

9 0.65031999 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

10 0.62757587 286 cvpr-2013-Mirror Surface Reconstruction from a Single Image

11 0.61496031 410 cvpr-2013-Specular Reflection Separation Using Dark Channel Prior

12 0.61470014 127 cvpr-2013-Discovering the Structure of a Planar Mirror System from Multiple Observations of a Single Point

13 0.59683865 409 cvpr-2013-Spectral Modeling and Relighting of Reflective-Fluorescent Scenes

14 0.59032017 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

15 0.56883568 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction

16 0.52897662 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging

17 0.51679629 330 cvpr-2013-Photometric Ambient Occlusion

18 0.50906742 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

19 0.4886902 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas

20 0.48564312 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.114), (16, 0.416), (26, 0.026), (33, 0.198), (66, 0.011), (67, 0.037), (69, 0.026), (87, 0.078)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.91370308 410 cvpr-2013-Specular Reflection Separation Using Dark Channel Prior

Author: Hyeongwoo Kim, Hailin Jin, Sunil Hadap, Inso Kweon

Abstract: We present a novel method to separate specular reflection from a single image. Separating an image into diffuse and specular components is an ill-posed problem due to lack of observations. Existing methods rely on a specular-free image to detect and estimate specularity, which however may confuse diffuse pixels with the same hue but a different saturation value as specular pixels. Our method is based on a novel observation that for most natural images the dark channel can provide an approximate specular-free image. We also propose a maximum a posteriori formulation which robustly recovers the specular reflection and chromaticity despite the hue-saturation ambiguity. We demonstrate the effectiveness of the proposed algorithm on real and synthetic examples. Experimental results show that our method significantly outperforms the state-of-the-art methods in separating specular reflection.

2 0.79943788 118 cvpr-2013-Detecting Pulse from Head Motions in Video

Author: Guha Balakrishnan, Fredo Durand, John Guttag

Abstract: We extract heart rate and beat lengths from videos by measuring subtle head motion caused by the Newtonian reaction to the influx of blood at each beat. Our method tracks features on the head and performs principal component analysis (PCA) to decompose their trajectories into a set of component motions. It then chooses the component that best corresponds to heartbeats based on its temporal frequency spectrum. Finally, we analyze the motion projected to this component and identify peaks of the trajectories, which correspond to heartbeats. When evaluated on 18 subjects, our approach reported heart rates nearly identical to an electrocardiogram device. Additionally we were able to capture clinically relevant information about heart rate variability.

same-paper 3 0.77989167 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

4 0.75325722 224 cvpr-2013-Information Consensus for Distributed Multi-target Tracking

Author: Ahmed T. Kamal, Jay A. Farrell, Amit K. Roy-Chowdhury

Abstract: Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms work by exchanging information between sensors that are communication neighbors. Since most cameras are directional sensors, it is often the case that neighboring sensors may not be sensing the same target. Such sensors that do not have information about a target are termed as “naive” with respect to that target. In this paper, we propose consensus-based distributed multi-target tracking algorithms in a camera network that are designed to address this issue of naivety. The estimation errors in tracking and data association, as well as the effect of naivety, are jointly addressed leading to the development of an information-weighted consensus algorithm, which we term the Multi-target Information Consensus (MTIC) algorithm. The incorporation of the probabilistic data association mechanism makes the MTIC algorithm very robust to false measurements/clutter. Experimental analysis is provided to support the theoretical results.

5 0.73432064 271 cvpr-2013-Locally Aligned Feature Transforms across Views

Author: Wei Li, Xiaogang Wang

Abstract: In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected to a common feature space and then matched with softly assigned metrics which are locally optimized. The features optimal for recognizing identities are different from those for clustering cross-view transforms. They are jointly learned by utilizing sparsity-inducing norm and information-theoretical regularization. This approach can be generalized to the settings where test images are from new camera views, not the same as those in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with the state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.

6 0.72986549 363 cvpr-2013-Robust Multi-resolution Pedestrian Detection in Traffic Scenes

7 0.72304893 138 cvpr-2013-Efficient 2D-to-3D Correspondence Filtering for Scalable 3D Object Recognition

8 0.69410139 403 cvpr-2013-Sparse Output Coding for Large-Scale Visual Recognition

9 0.67477405 326 cvpr-2013-Patch Match Filter: Efficient Edge-Aware Filtering Meets Randomized Search for Fast Correspondence Field Estimation

10 0.63731378 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

11 0.62211347 361 cvpr-2013-Robust Feature Matching with Alternate Hough and Inverted Hough Transforms

12 0.62042797 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

13 0.61892021 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction

14 0.60550988 115 cvpr-2013-Depth Super Resolution by Rigid Body Self-Similarity in 3D

15 0.59777296 130 cvpr-2013-Discriminative Color Descriptors

16 0.59586185 352 cvpr-2013-Recovering Stereo Pairs from Anaglyphs

17 0.5924623 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

18 0.59052277 384 cvpr-2013-Segment-Tree Based Cost Aggregation for Stereo Matching

19 0.58465397 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

20 0.58410817 54 cvpr-2013-BRDF Slices: Accurate Adaptive Anisotropic Appearance Acquisition