cvpr cvpr2013 cvpr2013-337 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim
Abstract: Integral imaging display (IID) is a promising technology to provide realistic 3D images without glasses. To achieve a large-screen IID at a reasonable fabrication cost, a potential solution is a tiled-lens-array IID (TLA-IID). However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. This work aims at compensating for these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. Since the lens arrays are transparent, this task is challenging for traditional calibration methods. In this paper, we propose a novel calibration method based on defining a set of principal observation rays that pass through the lens centers of the TLA and the camera's optical center. The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. The principal observation rays are automatically extracted using a structured-light-based method from a dense correspondence map between the displayed and captured pixels. Experiments show that lens array misalignments can be estimated with a standard deviation smaller than 0.4 pixels. Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.
Reference: text
sentIndex sentText sentNum sentScore
1 However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. [sent-10, score-0.737]
2 This work aims at compensating these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. [sent-11, score-1.103]
3 Since the lens arrays are transparent, this task is challenging for traditional calibration methods. [sent-12, score-0.952]
4 In this paper, we propose a novel calibration method based on defining a set of principal observation rays that pass through the lens centers of the TLA and the camera's optical center. [sent-13, score-1.042]
5 The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. [sent-14, score-0.908]
6 The principal observation rays are automatically extracted using a structured-light-based method from a dense correspondence map between the displayed and captured pixels. [sent-15, score-0.3]
7 Experiments show that lens array misalignments can be estimated with a standard deviation smaller than 0.4 pixels. [sent-18, score-0.877]
8 The proposed calibration method aims at eliminating 3D image artifacts for a tiled-lens array (TLA) integral imaging display (IID). [sent-27, score-0.619]
9 Uncalibrated virtual lenses mismatch the actual ones as shown in (c), which leads to 3D image artifacts as seen in (d). [sent-30, score-0.284]
10 Basically, a typical IID consists of a high-resolution LCD panel and a lens array as shown in figure 1(a). [sent-34, score-0.803]
11 A 2D image, named the integral image, is displayed on the LCD and is refracted by the lens array into different directions to form images in 3D space. [sent-35, score-0.899]
12 The integral image is created by simulating 3D objects and light rays through a virtual lens array (a computational model) from a viewing zone. [sent-36, score-1.24]
13 When the virtual lens array is consistent with the physical lens array, a correct 3D image can be seen within the viewing zone. [sent-37, score-1.586]
14 Though the size and resolution of commercial LCDs keep increasing, building a large lens array with high precision at reasonable cost is still a challenging task for today's optics fabrication industry. [sent-39, score-0.867]
15 To work around this, one alternative is to build a tiled lens array as a mosaic of smaller lens arrays, as shown in figure 1(b), which we refer to as a tiled-lens-array IID (TLA-IID). [sent-40, score-1.617]
16 In a TLA-IID, each lens array is small and thus is easier to fabricate with precision at low cost. [sent-41, score-0.809]
17 However, difficulties arise in aligning these lens arrays due to the precision limit of mounting tools and errors induced by mechanics and temperature variations during usage. [sent-42, score-0.8]
18 With the presence of misalignments, virtual lenses without calibration are not consistent with the physical ones (see figure 1(c)), which would result in 3D image artifacts across different lens arrays as shown in figure 1(d). [sent-43, score-1.236]
19 In order to compensate for these errors and provide a consistent 3D image across all the lens arrays from all viewpoints in the viewing zone, calibration is necessary. [sent-44, score-0.995]
20 Here the calibration aims at obtaining the actual lens array poses with precision to create a correct (calibrated) virtual lens array. [sent-45, score-1.815]
21 Then, by using these calibrated virtual lenses (see figure 1(e)) for rendering the 3D image, the 3D image artifacts can be removed as shown in figure 1(f). [sent-46, score-0.352]
22 The system was calibrated with a stereo camera pair, where each camera deals with one of the stereo views. [sent-50, score-0.259]
23 As for our system, an extension of this method would require an expensive camera array to handle the tens of views in both horizontal and vertical directions. [sent-51, score-0.323]
24 In [6], the authors presented a multi-viewer tiled automultiscopic display which consists of an array of lenticular-based multi-view displays. [sent-52, score-0.326]
25 A camera-based method was proposed in [13] for an IID with a lens array. [sent-55, score-0.709]
26 To the best of our knowledge, our effort is among the first camera-based methods to calibrate an IID with tiled lens arrays. [sent-58, score-0.787]
27 Different from these, we mainly focus on correcting the 3D image conveyed through lens arrays with IID methods. [sent-61, score-0.765]
28 For the above issues, the contributions of this work are: (1) The geometry of light rays captured by a camera through the lens array refraction is formulated. [sent-67, score-1.102]
29 Particularly, we define a set of principal observation rays (PORs) that pass through the lens centers and the camera's optical center. [sent-68, score-0.75]
30 Based on the PORs, the lens array pose with respect to the LCD can be explicitly computed with images taken at only one camera pose without any additional calibration markers, which leads to an efficient calibration procedure. [sent-69, score-1.293]
31 (3) With the calibrated virtual lens arrays, an efficient method is presented to create a ray model that relates a correct light ray with each integral image pixel. [sent-74, score-1.303]
32 The ray model allows creating integral images which show correct 3D images despite the TLA-IID misalignment errors. [sent-75, score-0.271]
33 The calibration method does not need any markers to be attached to the TLA-IID and it can be performed under natural light with only one camera. [sent-76, score-0.314]
34 The TLA integral imaging display system. An experimental TLA-IID was set up, for which the proposed calibration method was applied and tested. [sent-84, score-0.4]
35 The TLA-IID consists of an LCD panel and four lens arrays (figure 2). [sent-85, score-0.794]
36 Each of the four lens arrays is a hexagonal lens array, where the lens pitch in the horizontal direction is 2.59 mm and the lens pitch in the vertical direction is 1. [sent-88, score-2.057] [sent-89, score-0.72]
38 The focal length of the lens array is 7 mm and the thickness is the same. [sent-91, score-0.84]
39 In practical TLA-IIDs, there are usually some misalignments between the lens arrays. [sent-95, score-0.688]
40 For providing a challenging test in our experiments, two screws are intentionally inserted as spacers to enlarge rotational and translational misalignments among the lens arrays as shown in figure 2(b) and 2(d). [sent-96, score-0.899]
41 Definitions and assumptions. The lens arrays in the TLA are computationally modeled by a set of virtual lens arrays (VLAs). [sent-101, score-1.673]
42 Specifically, this work assumes that: (1) Each virtual lens in the VLAs is a thin lens with a unique optical center. [sent-104, score-1.384]
43 (2) All the virtual lens centers are on a plane parallel to the LCD plane at a distance that equals the lens focal length. [sent-105, score-1.486]
44 (3) The parameters of each VLA are known as provided by its design, which include lens shape, lens arrangement, lens pitch, focal length, and lens array size. [sent-106, score-2.62]
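Assumption (3) means the nominal lens-center layout of each VLA can be generated directly from its design parameters. Below is a minimal numpy sketch of such a generator, assuming a hexagonal packing in which alternate rows are offset by half the horizontal pitch; the vertical pitch and the lens counts are hypothetical placeholders, since the vertical pitch value is truncated in the text above.

```python
import numpy as np

def hexagonal_lens_centers(pitch_x, pitch_y, n_cols, n_rows):
    """Nominal 2D lens centers of one hexagonal virtual lens array (VLA),
    expressed in the VLA plane (mm). Alternate rows are shifted by half
    the horizontal pitch, which produces the hexagonal arrangement."""
    centers = []
    for r in range(n_rows):
        x_off = 0.5 * pitch_x if r % 2 else 0.0
        for c in range(n_cols):
            centers.append((c * pitch_x + x_off, r * pitch_y))
    return np.asarray(centers)  # shape (n_rows * n_cols, 2)

# Horizontal pitch from the paper (2.59 mm); the vertical pitch (1.5) and
# the lens counts are placeholders, not values stated in the text.
nominal = hexagonal_lens_centers(pitch_x=2.59, pitch_y=1.5, n_cols=40, n_rows=80)
```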
45 The four lens arrays constitute a region of interest (ROI) as shown in (a). [sent-111, score-0.765]
46 The light ray geometry when a camera observes the LCD panel through a lens array is shown in figure 4. [sent-120, score-1.167]
47 Since the camera is set at a distance to the lens array and each lens is small, the light rays captured in the camera’s image related to a single lens span a tiny angle and can be considered to be parallel. [sent-121, score-2.284]
48 It can be seen that among all the light rays captured by the camera from a specific lens, only the light ray through the lens’s optical center does not change its propagation direction. [sent-124, score-0.6]
49 Illustration of the imaging geometry when the LCD is seen by a camera through the lens array. [sent-126, score-0.766]
50 Illustration of the principal observation ray (POR) model to estimate virtual lens array (VLA) pose. [sent-129, score-1.044]
51 The ray passing through both the camera's optical center and a lens's optical center is defined as a principal observation ray (POR). [sent-130, score-0.28]
52 Each lens in the lens array provides a related POR. [sent-131, score-1.378]
53 The set of PORs is consistent with the geometry when a pinhole camera is used to observe the LCD plane without the lens array. [sent-132, score-0.259]
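Because a POR is undeviated by the lens, the LCD pixel that a POR "sees" is simply the intersection of the line through the camera's optical center and the lens center with the LCD plane. A minimal sketch, assuming the LCD plane sits at z = 0 and the lens centers at z = g in the world frame OwXwYwZw; the numeric coordinates are hypothetical:

```python
import numpy as np

def por_lcd_intersection(cam_center, lens_center):
    """Intersect a principal observation ray (POR) -- the line through the
    camera's optical center and a lens optical center -- with the LCD
    plane z = 0. Inputs are 3D points in the world frame OwXwYwZw."""
    d = lens_center - cam_center        # ray direction
    t = -cam_center[2] / d[2]           # line parameter where z = 0
    return cam_center + t * d           # point on the LCD plane

oc = np.array([120.0, 80.0, 500.0])     # hypothetical camera center (mm)
c = np.array([30.0, 20.0, 7.0])         # a lens center at z = g = 7 mm
print(por_lcd_intersection(oc, c))
```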
54 As shown in figure 4, the pixel related to a POR in the camera’s image can be extracted as the center of a lens in the image. [sent-135, score-0.666]
55 By decoding the captured structured light images, a dense correspondence map between the LCD pixels and camera pixels can be obtained. [sent-139, score-0.364]
56 Then the lens boundaries are extracted by detecting the drastic variations in the correspondence map. [sent-140, score-0.642]
57 Consider the center of the ith lens in the kth lens array. [sent-150, score-1.24]
58 C(k)(i) = R(a(k)) C(i) + t(k) (1), where a(k) is the rotation angle of the kth lens array in the VLA plane and t(k) = [tx(k), ty(k)]T is the translation of the kth lens array. [sent-159, score-1.43]
59 Here the z-coordinate of each lens center equals g, where g is the distance between the lens center plane and the LCD. [sent-160, score-0.636]
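The pose model of equation (1) is just a 2D rotation and translation applied to the nominal lens centers in the VLA plane. A sketch of that transform, with a hypothetical small misalignment:

```python
import numpy as np

def pose_lens_centers(nominal_xy, a_k, t_k):
    """Equation (1): rotate the nominal lens centers of the k-th lens
    array by the in-plane angle a(k) and translate them by
    t(k) = [tx(k), ty(k)]^T."""
    R = np.array([[np.cos(a_k), -np.sin(a_k)],
                  [np.sin(a_k),  np.cos(a_k)]])
    return nominal_xy @ R.T + np.asarray(t_k)

nominal_xy = np.array([[0.0, 0.0], [2.59, 0.0], [1.295, 1.5]])  # toy centers (mm)
posed = pose_lens_centers(nominal_xy, a_k=np.deg2rad(0.2), t_k=[0.05, -0.03])
```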
60 Figure 5 shows an example of a POR that passes through a lens optical center C(i). [sent-163, score-0.669]
61 The calibration method. The calibration method needs no additional devices and requires only one camera, placed at an unknown position facing the TLA-IID. [sent-186, score-0.503]
62 The calibration method can be summarized as the following steps: (a) Display a set of structured light images on the TLA-IID and capture each image with the camera at a fixed pose. [sent-187, score-0.439]
63 (b) Decode the structured light images to obtain a dense correspondence map between the LCD pixels and the camera pixels (an LCD-camera map). [sent-188, score-0.32]
64 Based on these, detect the number of lens arrays and extract the set of POR pixels for each lens array with image processing. [sent-190, score-1.564]
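Steps (a)-(b) hinge on the structured-light decode. The text does not name the pattern family; Gray-code patterns are one standard choice, so the decoder below is a plausible stand-in rather than the authors' exact scheme. Run it once on the X patterns and once on the Y patterns to obtain the dense LCD-camera map.

```python
import numpy as np

def decode_gray_code(bit_images, inverse_images):
    """Decode per-camera-pixel LCD coordinates from captured Gray-code
    frames. bit_images: list of HxW grayscale captures, one per bit plane
    (MSB first); inverse_images: matching inverted-pattern captures used
    for per-pixel thresholding. Returns one axis of the LCD-camera map."""
    bits = [(img > inv).astype(np.uint32)
            for img, inv in zip(bit_images, inverse_images)]
    gray = np.zeros_like(bits[0])
    for b in bits:                       # pack bit planes into a Gray code
        gray = (gray << 1) | b
    # Gray-to-binary: binary = gray ^ (gray >> 1) ^ (gray >> 2) ^ ...
    binary = gray.copy()
    shift = 1
    while (gray >> shift).any():
        binary ^= gray >> shift
        shift += 1
    return binary
```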
65 For a number of NA lens arrays, the calibration result is a set of parameters a(k), where k = 1, 2, ..., NA. [sent-194, score-0.791]
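The paper computes the pose explicitly from the PORs; details aside, recovering a rotation angle and translation from matched nominal and observed lens centers is a 2D rigid registration, for which a least-squares (Kabsch/Procrustes) fit is a standard tool. The sketch below is that generic fit, and `fit_rigid_2d` is a hypothetical helper, not necessarily the authors' exact estimator:

```python
import numpy as np

def fit_rigid_2d(nominal, observed):
    """Least-squares 2D rigid fit: find the angle a and translation t with
    observed ~= R(a) @ nominal + t (Kabsch / Procrustes). Both inputs are
    (N, 2) arrays of matched lens centers."""
    mu_n, mu_o = nominal.mean(0), observed.mean(0)
    H = (nominal - mu_n).T @ (observed - mu_o)   # 2x2 covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    a = np.arctan2(R[1, 0], R[0, 0])
    t = mu_o - R @ mu_n
    return a, t
```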
66 Create ray model with a calibrated TLA-IID. With integral imaging technology, the 3D image of a scene model is formed by displaying an integral image on the LCD. [sent-199, score-0.415]
67 The integral image is a 2D image created by simulating light ray intersections with the scene surfaces according to a ray model. [sent-200, score-0.464]
68 Specifically, the ray model relates each pixel in the integral image to a light ray in 3D space. [sent-201, score-0.494]
69 The key issue for creating the ray model is to assign each pixel in the integral image to a proper virtual lens in the VLAs according to the viewing geometry. [sent-204, score-1.069]
70 When this is determined, the light ray related to the pixel can be obtained simply as a straight line passing through the pixel and the optical center of that virtual lens. [sent-205, score-0.501]
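Constructing the ray itself is then one line of geometry: it passes through the pixel [U, V, 0] on the LCD and the assigned lens center [S, T, g]. A minimal sketch:

```python
import numpy as np

def pixel_ray(pixel_uv, lens_center_st, g):
    """Light ray for one integral image pixel: the line through the pixel
    [U, V, 0] on the LCD and the optical center [S, T, g] of its assigned
    virtual lens. Returns (origin, unit direction)."""
    origin = np.array([pixel_uv[0], pixel_uv[1], 0.0])
    through = np.array([lens_center_st[0], lens_center_st[1], g])
    d = through - origin
    return origin, d / np.linalg.norm(d)
```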
71 Based on the calibrated VLA poses, since the lens array structure and shape parameters are known, the positions of all the virtual lens centers in OwXwYwZw can be computed explicitly. [sent-208, score-1.59]
72 Assume that P is associated with the jth virtual lens; then the virtual lens center can be represented as [S(m, n), T(m, n), g]T, where S(m, n) = s(j), T(m, n) = t(j). [sent-213, score-0.922]
73 An integral image pixel P can be denoted as [U(m, n), V(m, n), 0]T. Figure 6 shows the viewing geometry of the TLA-IID, according to which the integral image pixels are mapped to the virtual lens centers. [sent-215, score-0.945]
74 A method to perform this mapping is proposed and summarized as the following steps: (1) Based on the calibration result, compute virtual lens centers in all the VLAs and store them in the set H. [sent-216, score-0.954]
75 (2) Initialize the two pixel-to-lens maps as zero matrices, S(m, n) = 0 and T(m, n) = 0, where S(m, n) and T(m, n) are the horizontal and vertical coordinates of the virtual lens center related with the pixel [m, n]T, respectively. [sent-220, score-0.857]
76 (4) For each virtual lens center H(j) in H, do the following. [sent-221, score-0.779]
77 The pixels are searched within a radius of 2p(D + g)/(Ds), where p is the lens pitch size and s is the LCD pixel pitch size (this pixel-to-lens assignment is sketched below). [sent-224, score-0.754]
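The following sketch implements a simplified version of this pixel-to-lens assignment: each calibrated lens center is projected from a frontal viewpoint at distance D onto the LCD, and pixels within the stated radius of the projection are claimed by that lens. Where footprints overlap, later lenses simply overwrite earlier ones; the paper's exact tie-breaking is not specified here, so treat this as an assumption.

```python
import numpy as np

def build_pixel_to_lens_maps(centers, lcd_h, lcd_w, s, g, D, p):
    """Fill the S(m, n) / T(m, n) maps that assign every LCD pixel to a
    calibrated virtual lens center. Each center [S, T, g] is projected
    from a frontal viewpoint [0, 0, D] onto the LCD plane z = 0, and
    pixels within the radius 2p(D + g)/(D*s) of the projection are
    claimed by that lens (a simplified stand-in for step (4))."""
    S = np.zeros((lcd_h, lcd_w))
    T = np.zeros((lcd_h, lcd_w))
    r = 2.0 * p * (D + g) / (D * s)          # radius in LCD pixels
    for sx, ty in centers:
        cx = sx * D / (D - g) / s            # projected center (pixels)
        cy = ty * D / (D - g) / s
        mm, nn = np.meshgrid(
            np.arange(max(int(cx - r), 0), min(int(cx + r) + 2, lcd_w)),
            np.arange(max(int(cy - r), 0), min(int(cy + r) + 2, lcd_h)))
        inside = (mm - cx) ** 2 + (nn - cy) ** 2 <= r ** 2
        S[nn[inside], mm[inside]] = sx
        T[nn[inside], mm[inside]] = ty
    return S, T
```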
78 Illustration of the method for creating ray model based on the calibrated virtual lens arrays. [sent-234, score-0.941]
79 This work uses the above ray model to render integral images by intersecting light rays with virtual 3D objects and assigning the surface colors at the intersection points to the related integral image pixels. [sent-238, score-0.654]
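As an illustration of that rendering step, here is a minimal ray-object intersection for a single ray against a virtual sphere; a full renderer would loop this over every pixel ray of the model. The function name and scene are hypothetical.

```python
import numpy as np

def shade_ray(origin, direction, sphere_c, sphere_r, color, background):
    """Minimal stand-in for the rendering step: intersect one ray of the
    ray model with a virtual sphere and return the surface color at the
    nearest hit, or the background color on a miss. `direction` must be
    a unit vector, so the quadratic coefficient a equals 1."""
    oc = origin - sphere_c
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - sphere_r ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return background
    t = (-b - np.sqrt(disc)) / 2.0           # nearest intersection
    return color if t > 0 else background
```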
80 Figure 7(b) shows an obtained LCD-camera map for the LCD's X-direction, where the color-coded value associated with a camera pixel is the X coordinate of the LCD pixel seen through the lens array. [sent-249, score-0.789]
81 Figure 7(d) shows a local region where the lens structure can be seen. [sent-250, score-0.604]
82 The structure appears because all the rays seen through a lens come from the same LCD pixel, as explained in section 3. [sent-251, score-0.704]
83 Based on the LCD-camera map, lens boundary extraction is performed by detecting pixels where the LCD-camera map varies drastically. [sent-253, score-0.629]
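A minimal sketch of this boundary detection, assuming the decoded map is a float image; the gradient-magnitude threshold is a free parameter not given in the text:

```python
import numpy as np

def lens_boundary_mask(lcd_cam_map, grad_thresh):
    """Detect lens boundaries in the LCD-camera correspondence map:
    inside one lens the decoded LCD coordinate is nearly constant, so
    pixels where it varies drastically mark the lens borders."""
    gy, gx = np.gradient(lcd_cam_map.astype(np.float64))
    return np.hypot(gx, gy) > grad_thresh
```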
84 Figure 8(b) shows a local region near the screen center, where the lens array structure and misalignments can be well seen. [sent-262, score-0.9]
85 Since the VLA geometry is known, a homography transform of a VLA can be found by image feature fitting, which provides the POR pixels as the lens image centers. [sent-263, score-0.681]
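A sketch of that homography fit using the classic direct linear transform (DLT); this generic estimator is an assumption about the fitting step, and `fit_homography` / `apply_homography` are hypothetical helpers. Mapping the known VLA lens centers through the fitted homography yields the POR pixels.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: fit H with dst ~ H @ src (homogeneous),
    e.g. mapping nominal VLA lens centers to detected lens image centers.
    src, dst: (N, 2) arrays with N >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)                 # null-space solution
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H and dehomogenize."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```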
86 Ambient lights are turned off to make the transparent lens arrays as visible as possible. [sent-270, score-0.8]
87 Precision evaluation of the calibration result. The precision of the calibration method is evaluated with a cross-verification approach. [sent-276, score-0.374]
88 Therefore, the deviation of the calibration results in different tests reflects the calibration precision. [sent-286, score-0.396]
89 In our test, the camera is placed at 16 different poses as shown in figure 10(a), at each of which the calibration procedure is performed. [sent-287, score-0.345]
90 Table 1 lists the calibration results for the four VLAs at the 16 camera poses. [sent-291, score-0.292]
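The cross verification itself reduces to a spread statistic: since the physical lens array poses do not change between the 16 tests, the per-parameter standard deviation across camera poses measures the calibration precision. A minimal sketch, with the array layout assumed here:

```python
import numpy as np

def calibration_spread(params_per_pose):
    """params_per_pose: array of shape (16, NA, 3) holding the result
    [a(k), tx(k), ty(k)] for every lens array k at each of the 16 camera
    poses. The true poses are fixed, so the standard deviation across the
    first axis reflects the calibration precision."""
    return params_per_pose.std(axis=0)       # shape (NA, 3)
```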
91 Here we also test the calibration by adding pixel position offsets to the POR pixels along a selected direction. [sent-298, score-0.262]
92 This test shows that using POR pixels to estimate the VLA geometry plays an important role in the precision of the calibration results. [sent-309, score-0.273]
93 Effects on removing 3D image artifacts. Finally, the calibration precision is examined through its effect on removing 3D artifacts. [sent-312, score-0.271]
94 Figure 12(b) shows a 3D image without calibration, for which the integral image is created without considering the lens misalignment. [sent-318, score-0.708]
95 It can be seen that the misalignments between the lens arrays still exist. [sent-322, score-0.849]
96 In practice, there are some cases where warps appear in lens arrays when their fabrication quality is limited. [sent-346, score-0.803]
97 These make the lens arrays not perfectly parallel to the LCD and introduce errors. [sent-347, score-0.765]
98 Conclusions. This work shows that 3D image artifacts in a TLA-IID due to lens array misalignments can be effectively removed by calibration. [sent-350, score-0.907]
99 By properly formulating the POR geometry and using structured-light-based methods, the proposed calibration can be performed with only one camera, without any additional markers, in a highly automatic manner. [sent-351, score-0.486]
100 Acknowledgements. We would like to thank Nobuji Sakai and Tomoyuki Kikuchi at Samsung Yokohama Research Institute (SYRI) for providing the lens arrays used in this work. [sent-354, score-0.765]
wordName wordTfidf (topN-words)
[('lens', 0.604), ('lcd', 0.403), ('vla', 0.315), ('calibration', 0.187), ('por', 0.176), ('array', 0.17), ('arrays', 0.161), ('iid', 0.146), ('virtual', 0.143), ('vlas', 0.139), ('ray', 0.127), ('light', 0.106), ('camera', 0.105), ('integral', 0.104), ('owxwywzw', 0.101), ('lenses', 0.092), ('misalignments', 0.084), ('pors', 0.083), ('tiled', 0.078), ('display', 0.078), ('rays', 0.07), ('pitch', 0.06), ('artifacts', 0.049), ('calibrated', 0.049), ('subfigures', 0.045), ('viewing', 0.043), ('screen', 0.042), ('tla', 0.041), ('structured', 0.041), ('fabrication', 0.038), ('sandin', 0.038), ('varrier', 0.038), ('precision', 0.035), ('transparent', 0.035), ('focal', 0.034), ('autostereoscopic', 0.034), ('optical', 0.033), ('center', 0.032), ('mm', 0.032), ('samsung', 0.031), ('imaging', 0.031), ('plane', 0.03), ('pixel', 0.03), ('rings', 0.029), ('poses', 0.029), ('panel', 0.029), ('geometry', 0.026), ('homography', 0.026), ('corrected', 0.025), ('oc', 0.025), ('pixels', 0.025), ('margolis', 0.025), ('mingcai', 0.025), ('multiprojector', 0.025), ('oaxayaza', 0.025), ('oozz', 0.025), ('oyx', 0.025), ('peterka', 0.025), ('sait', 0.025), ('sajadi', 0.025), ('screws', 0.025), ('spacers', 0.025), ('tlaiid', 0.025), ('cc', 0.025), ('hardware', 0.025), ('horizontal', 0.024), ('vertical', 0.024), ('placed', 0.024), ('principle', 0.023), ('reality', 0.023), ('ty', 0.023), ('decoding', 0.023), ('cap', 0.023), ('roi', 0.023), ('tortoise', 0.022), ('nme', 0.022), ('displays', 0.022), ('translation', 0.022), ('deviation', 0.022), ('correct', 0.022), ('captured', 0.021), ('create', 0.021), ('markers', 0.021), ('displayed', 0.021), ('equals', 0.021), ('centers', 0.02), ('offsets', 0.02), ('bottle', 0.02), ('optics', 0.02), ('coordinate', 0.02), ('boundaries', 0.02), ('pose', 0.02), ('immersive', 0.02), ('rendering', 0.019), ('nlpr', 0.019), ('front', 0.018), ('creating', 0.018), ('weiming', 0.018), ('cz', 0.018), ('correspondence', 0.018)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999982 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
Author: Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim
Abstract: Integral imaging display (IID) is a promising technology to provide realistic 3D images without glasses. To achieve a large-screen IID at a reasonable fabrication cost, a potential solution is a tiled-lens-array IID (TLA-IID). However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. This work aims at compensating for these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. Since the lens arrays are transparent, this task is challenging for traditional calibration methods. In this paper, we propose a novel calibration method based on defining a set of principal observation rays that pass through the lens centers of the TLA and the camera's optical center. The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. The principal observation rays are automatically extracted using a structured-light-based method from a dense correspondence map between the displayed and captured pixels. Experiments show that lens array misalignments can be estimated with a standard deviation smaller than 0.4 pixels. Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.
2 0.25003913 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras
Author: Donald G. Dansereau, Oscar Pizarro, Stefan B. Williams
Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedure for lenselet-based plenoptic cameras appropriate for a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.
3 0.23521942 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?
Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello
Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasi-pinhole central cameras is supported by an extensive experimental validation.
4 0.13638103 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields
Author: Sven Wanner, Christoph Straehle, Bastian Goldluecke
Abstract: We present the first variational framework for multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, features need to be extracted from the 2D projection of a three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of an image not only enables us to train classifiers which can overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online with complete depth and segmentation ground truth data where available, and thus establish the first benchmark data set for light field analysis to facilitate competitive further development of algorithms.
5 0.1349574 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures
Author: Yuichi Takeda, Shinsaku Hiura, Kosuke Sato
Abstract: In this paper we propose a novel depth measurement method by fusing depth from defocus (DFD) and stereo. One of the problems of the passive stereo method is the difficulty of finding correct correspondences between images when an object has a repetitive pattern or edges parallel to the epipolar line. On the other hand, the accuracy of the DFD method is inherently limited by the effective diameter of the lens. Therefore, we propose the fusion of the stereo method and DFD by giving different focus distances to the left and right cameras of a stereo camera with coded apertures. Two types of depth cues, defocus and disparity, are naturally integrated by the magnification and phase shift of a single point spread function (PSF) per camera. In this paper we give the proof of the proportional relationship between the diameter of defocus and disparity, which makes the calibration easy. We also show the outstanding performance of our method, which has the advantages of both depth cues, through simulation and actual experiments.
6 0.12031008 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation
7 0.11640663 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation
8 0.11418808 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields
9 0.10612256 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path
10 0.10526274 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems
11 0.096466124 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas
12 0.088603683 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation
13 0.084489182 368 cvpr-2013-Rolling Shutter Camera Calibration
14 0.074463792 260 cvpr-2013-Learning and Calibrating Per-Location Classifiers for Visual Place Recognition
16 0.070761822 344 cvpr-2013-Radial Distortion Self-Calibration
17 0.065565325 84 cvpr-2013-Cloud Motion as a Calibration Cue
18 0.065038137 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration
19 0.060451653 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition
20 0.059376098 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging
topicId topicWeight
[(0, 0.097), (1, 0.131), (2, 0.004), (3, 0.02), (4, -0.021), (5, -0.051), (6, -0.052), (7, 0.001), (8, 0.049), (9, 0.037), (10, -0.067), (11, 0.075), (12, 0.158), (13, -0.06), (14, -0.224), (15, 0.039), (16, 0.075), (17, 0.054), (18, -0.026), (19, 0.074), (20, 0.081), (21, 0.014), (22, -0.042), (23, -0.069), (24, -0.013), (25, 0.03), (26, 0.052), (27, 0.02), (28, 0.005), (29, 0.028), (30, 0.041), (31, -0.022), (32, 0.028), (33, 0.033), (34, -0.001), (35, 0.001), (36, -0.03), (37, -0.041), (38, -0.027), (39, 0.025), (40, -0.027), (41, -0.058), (42, 0.001), (43, -0.013), (44, 0.042), (45, -0.139), (46, -0.027), (47, -0.012), (48, 0.009), (49, 0.008)]
simIndex simValue paperId paperTitle
same-paper 1 0.96276462 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
Author: Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim
Abstract: Integral imaging display (IID) is a promising technology to provide realistic 3D images without glasses. To achieve a large-screen IID at a reasonable fabrication cost, a potential solution is a tiled-lens-array IID (TLA-IID). However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. This work aims at compensating for these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. Since the lens arrays are transparent, this task is challenging for traditional calibration methods. In this paper, we propose a novel calibration method based on defining a set of principal observation rays that pass through the lens centers of the TLA and the camera's optical center. The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. The principal observation rays are automatically extracted using a structured-light-based method from a dense correspondence map between the displayed and captured pixels. Experiments show that lens array misalignments can be estimated with a standard deviation smaller than 0.4 pixels. Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.
2 0.93089014 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras
Author: Donald G. Dansereau, Oscar Pizarro, Stefan B. Williams
Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedure for lenselet-based plenoptic cameras appropriate for a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.
3 0.84716755 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?
Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello
Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasi-pinhole central cameras is supported by an extensive experimental validation.
4 0.83444917 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation
Author: Timothy Yau, Minglun Gong, Yee-Hong Yang
Abstract: In underwater imagery, the image formation process includes refractions that occur when light passes from water into the camera housing, typically through a flat glass port. We extend the existing work on physical refraction models by considering the dispersion of light, and derive new constraints on the model parameters for use in calibration. This leads to a novel calibration method that achieves improved accuracy compared to existing work. We describe how to construct a novel calibration device for our method and evaluate the accuracy of the method through synthetic and real experiments.
5 0.76679075 344 cvpr-2013-Radial Distortion Self-Calibration
Author: José Henrique Brito, Roland Angst, Kevin Köser, Marc Pollefeys
Abstract: In cameras with radial distortion, straight lines in space are in general mapped to curves in the image. Although epipolar geometry also gets distorted, there is a set of special epipolar lines that remain straight, namely those that go through the distortion center. By finding these straight epipolar lines in camera pairs we can obtain constraints on the distortion center(s) without any calibration object or plumbline assumptions in the scene. Although this holds for all radial distortion models we conceptually prove this idea using the division distortion model and the radial fundamental matrix which allow for a very simple closed form solution of the distortion center from two views (same distortion) or three views (different distortions). The non-iterative nature of our approach makes it immune to local minima and allows finding the distortion center also for cropped images or those where no good prior exists. Besides this, we give comprehensive relations between different undistortion models and discuss advantages and drawbacks.
6 0.73855847 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging
7 0.70992488 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields
8 0.70844138 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition
9 0.69874096 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation
10 0.66083747 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields
11 0.6397146 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation
12 0.63526481 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems
13 0.60923785 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas
14 0.59384215 368 cvpr-2013-Rolling Shutter Camera Calibration
15 0.53460884 84 cvpr-2013-Cloud Motion as a Calibration Cue
16 0.48675632 176 cvpr-2013-Five Shades of Grey for Fast and Reliable Camera Pose Estimation
17 0.46955028 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video
19 0.43140504 409 cvpr-2013-Spectral Modeling and Relighting of Reflective-Fluorescent Scenes
20 0.40773961 428 cvpr-2013-The Episolar Constraint: Monocular Shape from Shadow Correspondence
topicId topicWeight
[(10, 0.095), (16, 0.047), (26, 0.03), (33, 0.191), (67, 0.032), (69, 0.024), (87, 0.471)]
simIndex simValue paperId paperTitle
1 0.88277495 274 cvpr-2013-Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization
Author: Marcus A. Brubaker, Andreas Geiger, Raquel Urtasun
Abstract: In this paper we propose an affordable solution to self-localization, which utilizes visual odometry and road maps as the only inputs. To this end, we present a probabilistic model as well as an efficient approximate inference algorithm, which is able to utilize distributed computation to meet the real-time requirements of autonomous systems. Because of the probabilistic nature of the model we are able to cope with uncertainty due to noisy visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community developed maps and visual odometry measurements, we are able to localize a vehicle up to 3m after only a few seconds of driving on maps which contain more than 2,150km of drivable roads.
2 0.87348878 230 cvpr-2013-Joint 3D Scene Reconstruction and Class Segmentation
Author: Christian Häne, Christopher Zach, Andrea Cohen, Roland Angst, Marc Pollefeys
Abstract: Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being ’too noisy’. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other’s task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.
3 0.86864865 354 cvpr-2013-Relative Volume Constraints for Single View 3D Reconstruction
Author: Eno Töppe, Claudia Nieuwenhuis, Daniel Cremers
Abstract: We introduce the concept of relative volume constraints in order to account for insufficient information in the reconstruction of 3D objects from a single image. The key idea is to formulate a variational reconstruction approach with shape priors in the form of relative depth profiles or volume ratios relating object parts. Such shape priors can easily be derived either from a user sketch or from the object's shading profile in the image. They can handle textured or shadowed object regions by propagating information. We propose a convex relaxation of the constrained optimization problem which can be solved optimally in a few seconds on graphics hardware. In contrast to existing single view reconstruction algorithms, the proposed algorithm provides substantially more flexibility to recover shape details such as self-occlusions, dents and holes, which are not visible in the object silhouette.
4 0.83840561 209 cvpr-2013-Hypergraphs for Joint Multi-view Reconstruction and Multi-object Tracking
Author: Martin Hofmann, Daniel Wolf, Gerhard Rigoll
Abstract: We generalize the network flow formulation for multiobject tracking to multi-camera setups. In the past, reconstruction of multi-camera data was done as a separate extension. In this work, we present a combined maximum a posteriori (MAP) formulation, which jointly models multicamera reconstruction as well as global temporal data association. A flow graph is constructed, which tracks objects in 3D world space. The multi-camera reconstruction can be efficiently incorporated as additional constraints on the flow graph without making the graph unnecessarily large. The final graph is efficiently solved using binary linear programming. On the PETS 2009 dataset we achieve results that significantly exceed the current state of the art.
5 0.82238412 125 cvpr-2013-Dictionary Learning from Ambiguously Labeled Data
Author: Yi-Chen Chen, Vishal M. Patel, Jaishanker K. Pillai, Rama Chellappa, P. Jonathon Phillips
Abstract: We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. The dictionary learning problem is solved using an iterative alternating algorithm. At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. The confidence of each sample is defined as the probability distribution on its ambiguous labels. The dictionaries are updated using either soft (EM-based) or hard decision rules. Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches.
same-paper 6 0.81669033 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
7 0.80936843 39 cvpr-2013-Alternating Decision Forests
8 0.80354536 107 cvpr-2013-Deformable Spatial Pyramid Matching for Fast Dense Correspondences
10 0.73951536 396 cvpr-2013-Simultaneous Active Learning of Classifiers & Attributes via Relative Feedback
11 0.7311728 298 cvpr-2013-Multi-scale Curve Detection on Surfaces
12 0.6891095 71 cvpr-2013-Boundary Cues for 3D Object Shape Recovery
13 0.6813516 155 cvpr-2013-Exploiting the Power of Stereo Confidences
14 0.66684735 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities
15 0.6600011 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging
16 0.65810943 147 cvpr-2013-Ensemble Learning for Confidence Measures in Stereo Vision
17 0.65665221 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances
18 0.6517477 467 cvpr-2013-Wide-Baseline Hair Capture Using Strand-Based Refinement
19 0.65075403 373 cvpr-2013-SWIGS: A Swift Guided Sampling Method
20 0.64867532 289 cvpr-2013-Monocular Template-Based 3D Reconstruction of Extensible Surfaces with Local Linear Elasticity