cvpr cvpr2013 cvpr2013-269 knowledge-graph by maker-knowledge-mining

269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition


Source: pdf

Author: Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-Ichiro Taniguchi

Abstract: Current object-recognition algorithms use local features, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches, though, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 These approaches, though, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. [sent-2, score-0.57]

2 Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. [sent-3, score-0.454]

3 We use a single-shot light field image as an input and model the distortion of the light field. [sent-5, score-0.567]

4 The distortion of the light field is caused by the refractive property of a transparent object. [sent-6, score-0.754]

5 The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. [sent-9, score-0.376]

6 However, these features and learning algorithms cannot apply to transparent objects. [sent-15, score-0.368]

7 …filled with many transparent objects, such as glasses, bottles, bowls, jars, vases, and windows, to name a few. [sent-18, score-0.348]

8 Depending on backgrounds, whether stationary or moving, the appearance of a transparent object drastically changes, as such objects do not have features entirely of their own, but rather show transmitted background images. [sent-19, score-0.505]

9 Using the standard approaches requires modeling local features not of the transparent object but of the background. [sent-20, score-0.399]

10 In this paper, we propose a novel object feature, called the light field distortion (LFD) feature. [sent-21, score-0.204]

11 We discuss how the LFD feature models and visually recognizes transparent objects, which to date have been ignored as exceptions in applications of visual object recognition or annotation. [sent-23, score-0.436]

12 Transparent objects are made of refractive materials, such as glass or plastics, and distort rays emanating from the scene background. [sent-24, score-0.14]

13 Different objects produce different distortions, each carrying intrinsic characteristics of the transparent object, namely the refractive index of the material and the object’s shape, both of which influence the distortion. [sent-25, score-0.449]

14 Compared with conventional cameras, which capture 2D photos from a single perspective, light field… [sent-31, score-0.185]

15 …cameras obtain richer 4D images that include multiple 2D viewpoints (position resolution) as well as standard 2D image coordinates (angular resolution). [sent-32, score-0.418]

16 The LFD feature models the distortion from differences in corresponding points between the viewpoints included in the 4D light field. [sent-33, score-0.348]

17 This is an entirely original concept for feature description, with the advantage that the LFD is less affected by background changes, as it uses patterns of ray distortions caused by the object, not patterns from the object’s appearance. [sent-35, score-0.208]

18 The light field camera was originally proposed for image-based rendering in the graphics community, and has been used for a variety of visualization applications, such as generating free-view images, 3D graphics, and digital refocusing. [sent-37, score-0.382]

19 Light field cameras consisting of a micro-lens array between the sensor and the main lens are becoming inexpensive and compact [2, 3]. [sent-43, score-0.375]

20 …some of these are available on the product market [4, 5, 6]. Hence, we believe that the light field… [sent-44, score-0.173]

21 …camera is becoming a popular input device in computer vision applications. [sent-45, score-0.37]

22 …a difficult computer vision problem, transparent object recognition with a single-shot image, 2) in proposing a new feature for transparent object recognition, called the LFD feature, and 3) in applying a light field… [sent-48, score-1.022]

23 …classified transparent objects without explicit physics-based refraction analysis and refraction models. [sent-52, score-0.664]

24 Therefore, if local features are drawn from a more dominant background than an object’s surface, existing learning and recognition methods perform poorly. [sent-58, score-0.125]

25 A transparent object yields less information about its appearance. [sent-59, score-0.379]

26 In consequence, extracting scene-independent local features from a transparent object area is difficult. [sent-62, score-0.399]

27 In other research directions, there has been much work on measuring refraction responses in transparent objects using cameras to obtain physical parameters, such as surface curvature or refractive index. [sent-66, score-0.638]

28 …measured light intensities from transparent objects through polarizing filters. [sent-69, score-0.551]

29 This method visualizes the refraction response in a scene as a gray-scale or color image by using special optics, although it requires high-quality optics and precise alignment. [sent-74, score-0.157]

30 …light field background-oriented Schlieren photography, which obtains Schlieren photos using a common hand-held camera and a special-purpose optical sheet. [sent-78, score-0.429]

31 Although this technique recovers the transparent surface [16], it also has restricted practical use, as the special sheet is always required. [sent-79, score-0.365]

32 Background distortion from changing viewpoints (figure caption). [sent-83, score-0.223]

33 [19] used two calibrated cameras to estimate the refractive indices over time-varying liquid surfaces from distortions of known grid patterns at the bottom of a tank. [sent-91, score-0.145]

34 In contrast to these approaches, the novelty of our work is to apply refraction to transparent object recognition, realized from a single-shot image, using a light field camera. [sent-92, score-0.723]

35 Unlike previous methods, there are no constraints on background texture, camera motion or known parameters. [sent-94, score-0.128]

36 Light Field Distortion Feature. By refraction, a transparent object deforms the background scene. [sent-96, score-0.455]

37 Different objects produce different images of the same scene (Figure 1), because refraction by objects is affected by shape and refractive index. [sent-97, score-0.274]

38 Using the background distortion caused by refraction is our means to recognize transparent objects. [sent-98, score-0.672]

39 In fact, we modeled the background distortion as the appearance difference between different perspectives (Figure 2). [sent-99, score-0.152]

40 In theory, the modeled distortion itself is independent of background texture: although the background determines the image appearance, the distortion of corresponding points across different viewpoints is maintained. [sent-100, score-0.375]

41 Therefore, our proposal is to model the object’s refraction as a distortion across multiple captured viewpoints. (Figure panel (a): undistorted light field.) [sent-101, score-0.29]

42 …we define the LFD feature and outline its use in transparent object recognition. [sent-108, score-0.407]

43 The light field is a function that describes the amount of light emitted in every direction from every point in a scene. [sent-110, score-0.491]

44 Conventional cameras only record that part of the light field… [sent-111, score-0.202]

45 …passing through a single viewpoint as a 2D image. [sent-112, score-0.318]

46 Here, we use the 4D-ray representation of the light field… [sent-118, score-0.173]

47 …L(s, t, u, v), determined by the intersection with a plane (s, t) and the slant of the ray (u, v) (see Figure 3). [sent-119, score-0.339]
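
To make the 4D-ray representation concrete, here is a minimal sketch (our own construction, not the authors' code; the 5 × 5 array size, baseline, and pinhole geometry are assumptions) of indexing rays by the viewpoint plane (s, t) and the ray slant (u, v):

```python
import numpy as np

# Assumed layout: a 5x5 camera array, each view a VGA grayscale image.
# L[s, t, u, v] stores the intensity of the ray through viewpoint (s, t)
# with slant coordinates (u, v) -- the 4D light field L(s, t, u, v).
S, T, U, V = 5, 5, 480, 640
L = np.zeros((S, T, U, V), dtype=np.float32)

def ray(s, t, u, v, baseline=0.02, focal=1.0):
    """Return (origin, unit direction) for the ray indexed by (s, t, u, v).

    Viewpoints sit on the (s, t) plane with spacing `baseline` (meters);
    (u, v) encodes the slant of the ray under an assumed pinhole model.
    """
    origin = np.array([s * baseline, t * baseline, 0.0])
    direction = np.array([u, v, focal], dtype=float)
    return origin, direction / np.linalg.norm(direction)

origin, direction = ray(2, 2, 0.1, -0.05)  # a ray from the central viewpoint
```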

48 Figure 4(a) illustrates the functioning of a camera array and shows the relation between the light field… [sent-120, score-0.253]

49 Figure 4(a) depicts a scene where there is no object between the background and the camera, i.e., an undistorted light field. [sent-124, score-0.107]

50 As illustrated, if rays emitted from a point in the background are straight, the observed light field… [sent-128, score-0.271]

51 …has constant disparities over the images for the different viewpoints. [sent-129, score-0.357]

52 In fact, these rays are distributed on a hyperplane in stuv-space because the actual light field… [sent-131, score-0.214]
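
The constant-disparity structure can be written down directly: for a Lambertian background point at a fixed depth, its pixel position shifts linearly with the viewpoint offset, so its ray samples lie on a hyperplane in (s, t, u, v) space. A small illustrative sketch with hypothetical numbers:

```python
def predicted_correspondence(u0, v0, s, t, disparity):
    """Predict where a point seen at (u0, v0) in the central view (s = t = 0)
    appears in viewpoint (s, t), assuming a Lambertian point at fixed depth:
    the shift is linear in the viewpoint offset with one constant disparity,
    i.e. the samples lie on a hyperplane in (s, t, u, v) space.
    """
    return u0 - disparity * s, v0 - disparity * t

# A point at (100, 200) with a disparity of 2 px per unit viewpoint step:
for s, t in [(-1, 0), (0, 0), (1, 0), (0, 1)]:
    print((s, t), predicted_correspondence(100.0, 200.0, s, t, 2.0))
```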

53 In contrast, if a transparent object intervenes between background and camera, the ray distribution deviates from the line or the hyperplane (Figure 4(b)). [sent-133, score-0.495]

54 This LFD is caused by refraction occurring within the object, which is characterized by the material (refractive index) and shape of the transparent object. [sent-134, score-0.508]

55 We call this the LFD feature, which is used as a feature for transparent object recognition. [sent-135, score-0.435]
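
One plausible way to turn this deviation into a feature vector (our interpretation of the description, not the authors' exact formulation) is to fit the best constant-disparity hyperplane to the tracked correspondences over all viewpoints and keep the residuals, which are near zero for Lambertian points and large where refraction bends the rays:

```python
import numpy as np

def lfd_feature(view_offsets, observed_uv):
    """Least-squares fit of the hyperplane u = u0 - d*s, v = v0 - d*t over
    all viewpoints; the residuals are the per-view deviations used as the
    feature (a sketch under the interpretation stated above).

    view_offsets: (N, 2) viewpoint offsets (s, t).
    observed_uv:  (N, 2) tracked pixel positions (u, v).
    """
    s, t = view_offsets[:, 0], view_offsets[:, 1]
    u, v = observed_uv[:, 0], observed_uv[:, 1]
    A = np.zeros((2 * len(s), 3))      # unknowns x = (u0, v0, d)
    A[0::2, 0], A[0::2, 2] = 1.0, -s   # u-rows: u = u0 - d*s
    A[1::2, 1], A[1::2, 2] = 1.0, -t   # v-rows: v = v0 - d*t
    b = np.empty(2 * len(s))
    b[0::2], b[1::2] = u, v
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return b - A @ x                   # residuals = the LFD vector
```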

56 Transparent object recognition. In this section, we describe our transparent object recognition algorithm. [sent-149, score-0.439]

57 This camera system has 25 VGA-resolution (640 × 480 pixels) cameras and can simultaneously capture images from 25 viewpoints (5 horizontal × 5 vertical viewpoints). [sent-154, score-0.223]

58 In the light field image, we use colors to indicate LFD patterns in this paper. [sent-165, score-0.339]

59 We estimated the disparities between the center and the other 24 viewpoints by optical flow. [sent-166, score-0.122]
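
The paper's keyword list includes Farnebäck, so the disparity estimation plausibly uses dense Farnebäck optical flow; a sketch along those lines (the OpenCV parameter values and the function layout are our choices, and the view images are hypothetical):

```python
import cv2
import numpy as np

def view_disparities(center_view, other_views):
    """Estimate dense flow from the central view to each of the other
    viewpoints with Farneback optical flow; each flow field plays the
    role of the per-pixel disparities described above.

    center_view: HxW uint8 grayscale image from the central camera.
    other_views: iterable of HxW uint8 images from the other viewpoints.
    """
    flows = []
    for view in other_views:
        # Arguments: prev, next, flow, pyr_scale, levels, winsize,
        #            iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(
            center_view, view, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)             # HxWx2 (du, dv) per pixel
    return flows
```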

60 The LFD shown is also an example of the 3 × 3 case; these images are actually taken from a 25-viewpoint light field camera. [sent-175, score-0.173]

61 …coming from the transparent object has a larger distortion than that from the background, since the disparities contain the refraction effect and deviate from the hyperplane assumed for Lambertian reflection. [sent-178, score-0.656]

62 We represent classes of transparent objects as patterns of histograms of visual words. [sent-188, score-0.399]
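
A minimal sketch of this bag-of-features step (our construction: the codebook size and the linear-SVM classifier are assumptions; the summary only states that histograms of visual words represent each class):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def train_bof_classifier(lfd_sets, labels, n_words=100):
    """Cluster LFD vectors into visual words, histogram them per image,
    and train a classifier on the normalized histograms.

    lfd_sets: list of (Ni, D) arrays, one set of LFD vectors per image.
    labels:   one class label per image.
    """
    codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    codebook.fit(np.vstack(lfd_sets))

    def histogram(lfds):
        words = codebook.predict(lfds)
        h = np.bincount(words, minlength=n_words).astype(float)
        return h / max(h.sum(), 1.0)   # normalized visual-word histogram

    X = np.array([histogram(s) for s in lfd_sets])
    clf = LinearSVC().fit(X, labels)
    return codebook, histogram, clf
```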

63 …classification of transparent objects in a laboratory setting and real environments under the following assumptions: there is one transparent object as a recognition target in a scene. [sent-196, score-0.88]

64 The target object appears in all of the viewpoints of the light field. [sent-197, score-0.288]

65 Relative positions and poses of the camera and target object are almost the same between training and testing. [sent-199, score-0.108]

66 …five background patterns. We performed some experiments in laboratory and real settings to evaluate the robustness and limitations of our proposed method. [sent-206, score-0.143]

67 As a reference position for LFD learning, we used a setting where the camera was 40 cm in front of the object and the background was 150 cm behind it. [sent-212, score-0.352]

68 Our task is classifying objects of seven different shapes (Figure 8) into seven classes under various background textures. [sent-214, score-0.148]

69 Also, the LFDs came from different regions of the object. (Figure 10: (a) different objects with the same background; (b) the same object with different backgrounds.) [sent-219, score-0.174]

70 …confirmed that the LFD feature is unaffected by the background difference, since the feature models not the intensity pattern but the geometrical distortion caused by object refraction. [sent-232, score-0.286]

71 …classification accuracy over the 7 object classes in front of 5 different backgrounds, although it realized transparent object recognition from a single-shot image. [sent-234, score-0.489]

72 Recognition ratios for camera position changes (figure caption). [sent-243, score-0.156]

73 Recognition ratios for background position changes (figure caption). …the camera against real backgrounds of structures at various depths… [sent-246, score-0.283]

74 This result shows that local features are unsuitable for transparent object recognition. [sent-254, score-0.399]

75 We changed camera and background positions, object poses and lighting conditions for the evaluation. [sent-262, score-0.215]

76 Recognition ratios for object pose changes (figure caption). Lighting angle [degree] (Figure 15 axis label). [sent-267, score-0.105]

77 Recognition ratios for illumination changes (figure caption). Figures 12-15 show decreases of the recognition accuracy caused by these changes from the reference setting used in the learning step. [sent-268, score-0.137]

78 We moved the camera over a range of ±10 cm from the reference position of 40 cm. [sent-269, score-0.12]

79 Figure 12 shows that the recognition ratios decreased when the displacement from the reference position increased, because the LFD feature is changed from the learned pattern related to the distance between the camera and the object. [sent-270, score-0.27]

80 The limitation on camera displacement is within 5 cm if we accept a 20% decrease in the recognition ratio. [sent-272, score-0.106]

81 We also moved the background position over the range of 50 cm to 250 cm from the object, while the reference position of the background was 150 cm. [sent-273, score-0.345]

82 Figure 13 shows that the recognition ratio decreased when the background was displaced from the reference position. [sent-274, score-0.173]

83 The ratio changes little when the background moves away from the object, while it decreases steeply when the background approaches the object. [sent-277, score-0.263]

84 The background position did not much affect the recognition ratio, and we can apply this method to more realistic non-planar scene backgrounds if we can assume that the background objects of the scene are placed reasonably far away. [sent-279, score-0.261]

85 The ratio of the asymmetric group decreased gradually, and the limitation on object pose variation is within 10 degrees if we accept a 20% degradation in recognition ratios. [sent-286, score-0.145]

86 We placed an additional point light source alongside the global illumination that was used in all of the experiments. [sent-288, score-0.185]

87 We changed the direction of the light source from above (0 degrees) to the side (90 degrees) with respect to the target object. [sent-289, score-0.214]

88 …reflections from the light source, and these effects changed as we moved the light source. [sent-292, score-0.417]

89 The figure shows that the internal and specular reflections from the light source contaminated the LFDs and decreased recognition by 20% on average. [sent-296, score-0.241]

90 Analysis for Texture Density. The background patterns used in the experiment have complex textures (see Figure 9), from which correspondence detection can easily be performed. [sent-300, score-0.112]

91 In Figure 16(a), the LFD features were extracted only from the edges of the transparent object, with no LFD feature taken from the interior of the object. [sent-305, score-0.396]

92 For another background (Figure 16(b)), LFD features were wrongly extracted exterior to the transparent object (see the top-left part of the figure). [sent-306, score-0.475]

93 First, to obtain ideal feature points, a dot pattern was displayed as a background behind the transparent object so that the correspondences for the LFDs were easy to detect. [sent-310, score-0.498]
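
A dot-pattern background like the one described can be generated directly for such a simulation (a sketch; the dot density and size are our guesses):

```python
import numpy as np

def dot_pattern(height=480, width=640, n_dots=500, radius=3, seed=0):
    """Render a white-on-black random dot pattern to display behind the
    object, so LFD correspondences are easy to detect."""
    rng = np.random.default_rng(seed)
    img = np.zeros((height, width), dtype=np.uint8)
    ys = rng.integers(radius, height - radius, n_dots)
    xs = rng.integers(radius, width - radius, n_dots)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy ** 2 + xx ** 2) <= radius ** 2
    for y, x in zip(ys, xs):
        patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
        patch[disk] = 255              # stamp one filled dot in place
    return img
```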

94 Therefore, our proposed LFD feature is considered effective in transparent object recognition. [sent-337, score-0.407]

95 The recognition ratios across different standard deviations of noise (Figure 18) show that the ratios decreased as the error levels increased. [sent-341, score-0.172]
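
A sketch of the kind of noise simulation this suggests (our reading: zero-mean Gaussian perturbation of the tracked correspondence positions at increasing standard deviations; all names here are hypothetical):

```python
import numpy as np

def add_correspondence_noise(observed_uv, sigma, rng=None):
    """Perturb tracked correspondence positions with zero-mean Gaussian
    noise of standard deviation `sigma` (pixels)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return observed_uv + rng.normal(0.0, sigma, size=observed_uv.shape)

# Sweep noise levels; recomputing LFD features and accuracy is left out.
correspondences = np.zeros((24, 2))    # hypothetical (u, v) tracks
for sigma in [0.0, 0.5, 1.0, 2.0]:
    noisy = add_correspondence_noise(correspondences, sigma)
```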

96 Conclusion. This paper proposed a novel feature termed the LFD feature, which models refraction in objects as distortions between multiple views captured by a light field camera. [sent-346, score-0.398]

97 Our method using the LFD feature achieved on average 70% accuracy with seven objects against different backgrounds, as assessed by leave-one-out cross-validation in real environments, hence verifying the effectiveness of our LFD feature. [sent-350, score-0.13]

98 We also evaluated the robustness and limitations of the proposed method under various conditions, such as camera and background positions, object poses, and lighting conditions. [sent-351, score-0.189]

99 In conclusion, we have been successful in: 1) producing a transparent object recognition approach based on a single-shot image, 2) employing a novel feature, namely refraction, for transparent object recognition, and 3) introducing the light field camera. [sent-353, score-0.96]

100 Nevertheless, the recognition accuracy did not exceed 70%, which is not high enough for practical applications. [sent-358, score-0.202]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('lfd', 0.786), ('transparent', 0.348), ('eld', 0.318), ('light', 0.173), ('refraction', 0.143), ('lfds', 0.089), ('background', 0.076), ('distortion', 0.076), ('refractive', 0.071), ('viewpoints', 0.071), ('schlieren', 0.067), ('gure', 0.056), ('ratios', 0.055), ('camera', 0.052), ('cm', 0.05), ('classi', 0.042), ('disparities', 0.039), ('backgrounds', 0.037), ('photography', 0.035), ('cult', 0.034), ('decreased', 0.033), ('bof', 0.033), ('laboratory', 0.032), ('dif', 0.032), ('object', 0.031), ('miyazaki', 0.031), ('wetzstein', 0.031), ('position', 0.03), ('objects', 0.03), ('recognition', 0.029), ('cameras', 0.029), ('phase', 0.028), ('changed', 0.028), ('array', 0.028), ('feature', 0.028), ('cased', 0.025), ('ections', 0.025), ('maeno', 0.025), ('shadowgraph', 0.025), ('toredlight', 0.025), ('distortions', 0.024), ('taniguchi', 0.022), ('rmed', 0.022), ('farneback', 0.022), ('ection', 0.022), ('suf', 0.022), ('rays', 0.022), ('acquisition', 0.021), ('ray', 0.021), ('patterns', 0.021), ('seven', 0.021), ('nagahara', 0.021), ('ratio', 0.02), ('features', 0.02), ('settles', 0.02), ('hyperplane', 0.019), ('changes', 0.019), ('horowitz', 0.019), ('front', 0.018), ('environments', 0.018), ('sift', 0.018), ('moved', 0.018), ('ikeuchi', 0.017), ('morris', 0.017), ('cation', 0.017), ('raskar', 0.017), ('caused', 0.017), ('frequent', 0.017), ('primal', 0.017), ('disparity', 0.017), ('degree', 0.017), ('glass', 0.017), ('heidrich', 0.017), ('polarization', 0.017), ('surface', 0.017), ('lighting', 0.016), ('ned', 0.016), ('accept', 0.015), ('correspondence', 0.015), ('hue', 0.015), ('saturation', 0.015), ('realized', 0.015), ('reference', 0.015), ('real', 0.014), ('evaluated', 0.014), ('optics', 0.014), ('simulation', 0.013), ('shot', 0.013), ('target', 0.013), ('con', 0.012), ('surf', 0.012), ('optical', 0.012), ('recognize', 0.012), ('decrease', 0.012), ('photos', 0.012), ('placed', 0.012), ('tracking', 0.012), ('poses', 0.012), ('visualization', 0.012), ('specular', 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

Author: Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-Ichiro Taniguchi

Abstract: Current object-recognition algorithms use local features, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches, though, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

2 0.28579244 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

3 0.15162888 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

4 0.13215564 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LF-probe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

5 0.097363688 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

Author: Bastian Goldluecke, Sven Wanner

Abstract: Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space.

6 0.093653813 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

7 0.090111859 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

8 0.07789816 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns

9 0.077783369 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

10 0.061053153 344 cvpr-2013-Radial Distortion Self-Calibration

11 0.060451653 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

12 0.059309445 330 cvpr-2013-Photometric Ambient Occlusion

13 0.057364389 55 cvpr-2013-Background Modeling Based on Bidirectional Analysis

14 0.05367646 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

15 0.052844044 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

16 0.04130403 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera

17 0.040471453 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

18 0.038138963 332 cvpr-2013-Pixel-Level Hand Detection in Ego-centric Videos

19 0.035046745 445 cvpr-2013-Understanding Bayesian Rooms Using Composite 3D Object Models

20 0.034869157 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.087), (1, 0.093), (2, 0.006), (3, 0.019), (4, -0.01), (5, -0.055), (6, -0.041), (7, 0.008), (8, 0.031), (9, 0.028), (10, -0.053), (11, 0.029), (12, 0.104), (13, -0.054), (14, -0.152), (15, 0.043), (16, 0.077), (17, 0.038), (18, 0.006), (19, 0.056), (20, 0.078), (21, 0.029), (22, -0.023), (23, -0.072), (24, -0.027), (25, 0.015), (26, 0.058), (27, 0.063), (28, -0.035), (29, -0.017), (30, 0.018), (31, -0.054), (32, 0.082), (33, -0.058), (34, 0.061), (35, -0.03), (36, -0.006), (37, 0.002), (38, -0.006), (39, -0.032), (40, -0.098), (41, 0.027), (42, -0.02), (43, 0.018), (44, 0.01), (45, -0.031), (46, -0.062), (47, 0.063), (48, 0.01), (49, 0.036)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.88165909 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

Author: Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-Ichiro Taniguchi

Abstract: Current object-recognition algorithms use local features, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches, though, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

2 0.82966018 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

Author: Timothy Yau, Minglun Gong, Yee-Hong Yang

Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.

3 0.80537492 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LF-probe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

4 0.78017795 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

5 0.7730481 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

Author: Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim

Abstract: Integral imaging display (IID) is a promising technology to provide realistic 3D image without glasses. To achieve a large screen IID with a reasonable fabrication cost, a potential solution is a tiled-lens-array IID (TLA-IID). However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. This work aims at compensating these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. Since the lens arrays are transparent, this task is challenging for traditional calibration methods. In this paper, we propose a novel calibration method based on defining a set of principle observation rays that pass lens centers of the TLA and the camera’s optical center. The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. The principle observation rays are automatically extracted using a structured light based method from a dense correspondence map between the displayed and captured pixels. Experiments show that lens array misalignments can be estimated with a standard deviation smaller than 0.4 pixels. Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.

6 0.74999911 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

7 0.71194166 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

8 0.69398099 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

9 0.63420421 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

10 0.59282488 409 cvpr-2013-Spectral Modeling and Relighting of Reflective-Fluorescent Scenes

11 0.56169254 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

12 0.51027453 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

13 0.50058389 330 cvpr-2013-Photometric Ambient Occlusion

14 0.4914287 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging

15 0.46635947 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas

16 0.45771387 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction

17 0.43883768 344 cvpr-2013-Radial Distortion Self-Calibration

18 0.43272996 410 cvpr-2013-Specular Reflection Separation Using Dark Channel Prior

19 0.39786068 127 cvpr-2013-Discovering the Structure of a Planar Mirror System from Multiple Observations of a Single Point

20 0.37597016 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.109), (16, 0.125), (26, 0.025), (33, 0.225), (39, 0.011), (67, 0.029), (69, 0.036), (74, 0.223), (80, 0.01), (87, 0.073), (96, 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.79029495 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

Author: Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-Ichiro Taniguchi

Abstract: Current object-recognition algorithms use local features, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches, though, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

2 0.78985381 61 cvpr-2013-Beyond Point Clouds: Scene Understanding by Reasoning Geometry and Physics

Author: Bo Zheng, Yibiao Zhao, Joey C. Yu, Katsushi Ikeuchi, Song-Chun Zhu

Abstract: In this paper, we present an approach for scene understanding by reasoning physical stability of objects from point cloud. We utilize a simple observation that, by human design, objects in static scenes should be stable with respect to gravity. This assumption is applicable to all scene categories and poses useful constraints for the plausible interpretations (parses) in scene understanding. Our method consists of two major steps: 1) geometric reasoning: recovering solid 3D volumetric primitives from defective point cloud; and 2) physical reasoning: grouping the unstable primitives to physically stable objects by optimizing the stability and the scene prior. We propose to use a novel disconnectivity graph (DG) to represent the energy landscape and use a Swendsen-Wang Cut (MCMC) method for optimization. In experiments, we demonstrate that the algorithm achieves substantially better performance for i) object segmentation, ii) 3D volumetric recovery of the scene, and iii) better parsing result for scene understanding in comparison to state-of-the-art methods in both public dataset and our own new dataset.

3 0.78656119 28 cvpr-2013-A Thousand Frames in Just a Few Words: Lingual Description of Videos through Latent Topics and Sparse Object Stitching

Author: Pradipto Das, Chenliang Xu, Richard F. Doell, Jason J. Corso

Abstract: The problem of describing images through natural language has gained importance in the computer vision community. Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level.

4 0.78483886 373 cvpr-2013-SWIGS: A Swift Guided Sampling Method

Author: Victor Fragoso, Matthew Turk

Abstract: We present SWIGS, a Swift and efficient Guided Sampling method for robust model estimation from image feature correspondences. Our method leverages the accuracy of our new confidence measure (MR-Rayleigh), which assigns a correctness-confidence to a putative correspondence in an online fashion. MR-Rayleigh is inspired by Meta-Recognition (MR), an algorithm that aims to predict when a classifier’s outcome is correct. We demonstrate that by using a Rayleigh distribution, the prediction accuracy of MR can be improved considerably. Our experiments show that MR-Rayleigh tends to predict better than the often-used Lowe ’s ratio, Brown’s ratio, and the standard MR under a range of imaging conditions. Furthermore, our homography estimation experiment demonstrates that SWIGS performs similarly or better than other guided sampling methods while requiring fewer iterations, leading to fast and accurate model estimates.

5 0.78342706 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

6 0.77809393 118 cvpr-2013-Detecting Pulse from Head Motions in Video

7 0.76871049 224 cvpr-2013-Information Consensus for Distributed Multi-target Tracking

8 0.76800466 271 cvpr-2013-Locally Aligned Feature Transforms across Views

9 0.76601154 410 cvpr-2013-Specular Reflection Separation Using Dark Channel Prior

10 0.76216078 138 cvpr-2013-Efficient 2D-to-3D Correspondence Filtering for Scalable 3D Object Recognition

11 0.75789672 363 cvpr-2013-Robust Multi-resolution Pedestrian Detection in Traffic Scenes

12 0.75756925 403 cvpr-2013-Sparse Output Coding for Large-Scale Visual Recognition

13 0.75641328 326 cvpr-2013-Patch Match Filter: Efficient Edge-Aware Filtering Meets Randomized Search for Fast Correspondence Field Estimation

14 0.75210273 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

15 0.74919158 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

16 0.74556798 361 cvpr-2013-Robust Feature Matching with Alternate Hough and Inverted Hough Transforms

17 0.74320352 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction

18 0.74291873 115 cvpr-2013-Depth Super Resolution by Rigid Body Self-Similarity in 3D

19 0.74142796 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

20 0.73871905 384 cvpr-2013-Segment-Tree Based Cost Aggregation for Stereo Matching