iccv iccv2013 iccv2013-407 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Nicolas Martin, Vincent Couture, Sébastien Roy
Abstract: We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract. We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. [sent-7, score-0.846]
2 We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. [sent-9, score-0.55]
3 Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. [sent-10, score-0.804]
4 We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting. [sent-11, score-0.57]
5 Introduction. Active scanning approaches using a camera and a projector have gained popularity in various 3D scene reconstruction systems [15, 14]. [sent-13, score-0.593]
6 One or many known patterns are projected onto a scene, and a camera observes the deformation of these patterns to calculate surface information. [sent-14, score-0.631]
7 Camera-projector correspondence is achieved by identifying each projector pixel by a code defined by the projected patterns. [sent-15, score-0.529]
8 Because the resolution of a projector is finite, several methods attempt to recover subpixel correspondences, which yields better reconstruction results. [sent-16, score-1.023]
9 In practice, it is often the case that a camera pixel observes a mixture of intensities from two or more projector pixels. [sent-17, score-0.523]
10 The main contribution of this paper is to present a method that recovers very high precision subpixel correspondence and is robust to indirect illumination. [sent-19, score-0.879]
11 Our method uses a sequence of gray-level band-pass white noise patterns to encode each projector pixel uniquely [4]. [sent-20, score-0.73]
12 These are called unstructured patterns because the codes do not represent projector pixel position directly, and a search is required to find the best correspondence for each camera pixel [12, 5, 17, 4]. [sent-22, score-0.971]
13 This approach is robust to challenging difficulties in active systems such as indirect illumination and scene discontinuities. [sent-23, score-0.314]
14 Moreover, it produces dense subpixel correspondence, whereas the original method did not. [sent-25, score-0.662]
15 The key to achieving both subpixel correspondence and reducing the number of patterns is to increase the length of the code generated from the patterns. [sent-26, score-0.938]
16 Instead of using the signed differences between each pattern and a reference as in [4], we consider differences between all possible pairs of blurred gray level unstructured patterns. [sent-27, score-0.188]
17 The resulting codes are much longer than the number of patterns, albeit with some redundancy. [sent-28, score-0.291]
18 Every sign change between neighboring projector pixels provides a zero-crossing which is used as a constraint to recover subpixel correspondence. [sent-29, score-0.971]
19 The method we propose uses two-dimensional patterns and is designed to avoid the need for geometric or photometric calibration of both the camera and the projector. [sent-32, score-0.359]
20 While our method could rely on epipolar geometry to allow using one-dimensional patterns, we argue that they create more indirect lighting because of their low frequency in one direction [9]. [sent-33, score-0.4]
21 In Sec. 2, we summarize previous work in coded light systems, in particular methods that achieve subpixel precision. [sent-37, score-0.652]
22 In Sec. 4, we show how to recover subpixel correspondence on synthetic data. [sent-41, score-0.662]
23 Previous work. The goal of this paper is to achieve high-precision subpixel reconstruction for static scenes in the presence of several challenges like indirect illumination, scene discontinuities or projector defocus (see [13] for a list of standard problems). [sent-47, score-1.292]
24 Many active reconstruction methods can work at subpixel precision levels (see [15, 14] for extensive reviews). [sent-48, score-0.703]
25 Several methods are based on the projection of sinusoidal patterns that encode the projector position by a unique phase [18, 19]. [sent-51, score-0.414]
26 The pattern must be shifted several times and several frequencies are often needed [11]. [sent-52, score-0.153]
27 A limited photometric calibration is required since the phase estimation is directly related to the intensities affected by the gamma of the projector. [sent-53, score-0.39]
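For background, phase shifting with K shifts per frequency admits a standard closed-form phase estimator; a minimal sketch of the textbook formulation (shown for reference, not taken from this paper) is:

```latex
% K-step phase shifting: k-th projected pattern and per-pixel phase estimate.
% A is the ambient/offset term, B the modulation amplitude (standard form).
\begin{align}
  I_k(x) &= A(x) + B(x)\,\cos\!\Big(\phi(x) - \frac{2\pi k}{K}\Big),
            \qquad k = 0, \dots, K-1, \\
  \hat{\phi}(x) &= \operatorname{atan2}\!\Bigg(
      \sum_{k=0}^{K-1} I_k(x)\,\sin\frac{2\pi k}{K},\;
      \sum_{k=0}^{K-1} I_k(x)\,\cos\frac{2\pi k}{K}\Bigg).
\end{align}
```

A projector gamma that distorts the cosine biases the estimate of the phase directly, which is why a limited photometric calibration is needed here, and why sign-based codes such as the ones used in this paper avoid it.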
28 Modulated phase shifting [3] was introduced to generate less indirect illumination and increase the accuracy of the subpixel correspondences. [sent-54, score-1.068]
29 The method modulates the highest frequency patterns with orthogonal high frequency sine waves. [sent-55, score-0.476]
30 The number of projected patterns needed is very high, however, since each pattern is itself modulated by several shifted patterns. [sent-56, score-0.478]
31 The method described in [6] can be used to reduce the required number of patterns by multiplexing the modulated patterns together. [sent-57, score-0.59]
32 Phase unwrapping involves lower frequency patterns that can introduce large errors [11], in particular because of indirect lighting [13]. [sent-59, score-0.69]
33 Recently, micro phase shifting was introduced in [10] to unwrap the recovered phases using only high frequency patterns. [sent-60, score-0.467]
34 Due to low frequencies in one direction, the projected patterns still produce some indirect illumination that can affect the results. [sent-61, score-0.567]
35 Another category of methods [12, 4] uses so-called unstructured light patterns that form temporal codewords to identify each projector pixel uniquely, but requires an explicit search to obtain correspondences. [sent-62, score-0.391]
[Figure 2: quadratic codes are built from the signed differences between pairs of images. Two quadratic codes are shown for two adjacent pixels of the image pair (i, j). [sent-63, score-0.156] The labels A and B illustrate the computation of a bit of $\ddot{W}[x, y]$ and $\ddot{W}[x + 1, y]$ as $\mathrm{bit}(c_i[x, y] - c_j[x, y])$ and $\mathrm{bit}(c_i[x + 1, y] - c_j[x + 1, y])$. [sent-64, score-0.184]]
39 In [4], the patterns were designed to keep the amount of indirect illumination constant, and the method was shown to be very robust. [sent-66, score-0.426]
40 However, it did not yield subpixel-accurate reconstruction and required many patterns. [sent-67, score-0.679]
41 From linear to quadratic code length. In [4], a camera pixel recovered a bit from the observed intensity by looking at the sign of its difference with the mean intensity over all patterns. [sent-69, score-0.352]
42 For N patterns, a linear codeword of N bits is generated by comparing each captured pattern $c_i$ with the average image $\bar{c}$ for each pixel $p = (x, y)$. [sent-71, score-0.24]
43 The quadratic code instead takes one bit per pair of patterns, $\ddot{W}[p]_{ij} = \mathrm{bit}(c_i[p] - c_j[p])$ for $i < j$ (Eq. 3). This quadratic code is very unstable for binary patterns, however, since half the intensity comparisons will yield differences of 0. [sent-81, score-0.36]
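A minimal sketch of how such a quadratic code could be computed from N captured gray-level images, assuming the pairwise bit definition above (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def quadratic_code(captures):
    """Per-pixel quadratic code from N captured patterns.

    captures: float array of shape (N, H, W), one image per projected pattern.
    Returns a boolean array of shape (N*(N-1)//2, H, W) with one bit per
    unordered pattern pair (i, j): the bit is set when c_i[p] > c_j[p].
    """
    n = captures.shape[0]
    bits = [captures[i] > captures[j]
            for i in range(n) for j in range(i + 1, n)]
    return np.stack(bits, axis=0)
```

With gray-level inputs the pairwise differences are rarely exactly zero, which is why the binary-pattern instability mentioned above disappears.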
44 We next explain how to generate our patterns which alleviate this problem. [sent-82, score-0.242]
45 Blurred gray-level pattern generation. We propose to use band-pass gray-level patterns, which are generated as follows. [sent-85, score-0.376]
46 Similarly to [4], we first apply a band-pass filter to white noise in the frequency domain, keeping only frequencies ranging from f to 2f, where f is the same parameter as in [4]. [sent-86, score-0.216]
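A minimal sketch of this generation step, assuming an ideal radial band-pass mask over [f, 2f] cycles per frame (the exact filter shape and the blurring step are assumptions for illustration):

```python
import numpy as np

def bandpass_noise_pattern(h, w, f, rng=None):
    """One gray-level band-pass white noise pattern.

    White noise is filtered in the frequency domain, keeping only radial
    frequencies between f and 2f cycles per frame, then normalized to [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(rng.standard_normal((h, w)))
    fy = np.fft.fftfreq(h) * h                       # vertical cycles per frame
    fx = np.fft.fftfreq(w) * w                       # horizontal cycles per frame
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    spectrum[(radius < f) | (radius > 2 * f)] = 0.0  # ideal band-pass mask
    pattern = np.real(np.fft.ifft2(spectrum))
    pattern -= pattern.min()
    return pattern / pattern.max()                   # displayable gray levels
```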
47 In the next section, we analyse the number of patterns required to match. [sent-92, score-0.242]
48 Number of required patterns. Using these gray-level patterns, the quadratic code now contains more information for each pixel than its linear counterpart, but also some redundancy. [sent-95, score-0.381]
49 As an example, 50 images will provide a quadratic code of length 1275 bits, which effectively contains 214 bits of information. [sent-101, score-0.237]
50 A minimum of 24 patterns is needed to uniquely encode each pixel of an 800×600 projector. [sent-103, score-0.323]
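These counts can be sanity-checked. One reading consistent with the quoted figures (a hedged back-of-the-envelope, not the paper's own derivation) is that the 1275-bit code combines the C(50, 2) = 1225 pairwise bits with the 50 linear bits, while the information content is bounded by the number of orderings of the 50 observed intensities:

```python
from math import comb, factorial, log2

N = 50
code_length = comb(N, 2) + N      # 1225 pairwise bits + 50 linear bits = 1275
info_bits = log2(factorial(N))    # ~214.2: only the ordering of the N
                                  # intensities at a pixel is observable
label_bits = log2(800 * 600)      # ~18.9 bits to uniquely label each pixel
print(code_length, round(info_bits, 1), round(label_bits, 1))
```

The stated minimum of 24 patterns sits well above this information-theoretic floor, presumably to keep the codes discriminative under noise.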
51 The number of patterns is also expected to be lower when the epipolar geometry is known. [sent-108, score-0.32]
52 Note that, in our experiments, we chose to use more than the minimal number of patterns, to remove the number of images as a source of errors and better assess the remaining reconstruction errors. [sent-109, score-0.33]
53 Achieving subpixel accuracy. As in [4], the non-subpixel correspondence of a camera pixel is found using the LSH algorithm [2], which finds a match between the pixels of the camera and the projector, identified by the quadratic codes $\{\ddot{W}^c\}$ and $\{\ddot{W}^p\}$ respectively (using Eq. 3). [sent-111, score-0.941]
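The matching criterion can be sketched as a nearest-code search in Hamming distance; the paper uses LSH [2] to make the search fast, so the brute-force version below is shown only to make the criterion explicit (illustrative names, not the authors' code):

```python
import numpy as np

def match_codes(cam_codes, proj_codes):
    """Nearest projector code for each camera code (Hamming distance).

    cam_codes:  (Nc, B) boolean quadratic codes, one row per camera pixel.
    proj_codes: (Np, B) boolean quadratic codes, one row per projector pixel.
    Brute force shown for clarity; LSH replaces the inner argmin in practice.
    """
    matches = np.empty(cam_codes.shape[0], dtype=np.int64)
    for k, code in enumerate(cam_codes):
        matches[k] = int(np.argmin((proj_codes != code).sum(axis=1)))
    return matches
```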
54 Assuming a camera-projector pixel ratio near 1, the camera pixel will generally see a mixture of four neighboring projector pixels. [sent-113, score-0.493]
55 This mixture can be described by two parameters $(\hat{\lambda}_x, \hat{\lambda}_y)$, where $0 \le \hat{\lambda}_x, \hat{\lambda}_y \le 1$, which represent the subpixel matching disparity between camera pixel $\hat{p}$ and projector pixel $p$. [sent-114, score-1.093]
56 Consider that a camera pixel $\hat{p} = (\hat{x}, \hat{y})$ has been matched to a projector pixel $p = (x, y)$ using the LSH algorithm. [sent-115, score-0.493]
57 To estimate $(\hat{\lambda}_x, \hat{\lambda}_y)$, we first need to find which quadrant, represented by four projector pixels around the match, contains the true subpixel position. [sent-116, score-0.496]
[Figure 3: the red and cyan points correspond to intensities of $p_i$ and $p_j$ respectively, for one quadrant out of four. [sent-117, score-0.351]]
59 Each pair (i, j) generates a 2D zero-crossing that provides constraints that are used to estimate the true subpixel position. [sent-119, score-0.625]
60 Selecting the right quadrant. There are four quadrants, each composed of three projector pixels located around the matched projector pixel. [sent-123, score-0.84]
61 The correct quadrant is selected as the pair $(\hat{\delta}_x, \hat{\delta}_y)$ for which the difference between the camera and projector codes is minimal: $(\hat{\delta}_x, \hat{\delta}_y) = \arg\min_{\delta_x, \delta_y \in \{-1, 1\}} \sum_{q \in Q(\delta_x, \delta_y)} \big\| \ddot{W}^c[\hat{p}] - \ddot{W}^p[q] \big\|$, where $Q(\delta_x, \delta_y)$ denotes the three neighbors $\{(x + \delta_x, y), (x, y + \delta_y), (x + \delta_x, y + \delta_y)\}$. [sent-124, score-0.592]
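One plausible reading of this selection rule, written out as code (the helper proj_code_at and the summed Hamming cost are assumptions for illustration):

```python
def select_quadrant(cam_code, proj_code_at):
    """Pick the quadrant offsets minimizing the camera/projector code gap.

    cam_code:     (B,) boolean code observed at the matched camera pixel.
    proj_code_at: callable (dx, dy) -> (B,) boolean code of the projector
                  pixel offset by (dx, dy) from the matched pixel p.
    Each candidate quadrant is scored by the summed Hamming distance over
    the three neighbors that, with p, form that quadrant.
    """
    def cost(dx, dy):
        neighbors = [(dx, 0), (0, dy), (dx, dy)]
        return sum(int((proj_code_at(nx, ny) != cam_code).sum())
                   for nx, ny in neighbors)

    return min(((dx, dy) for dx in (-1, 1) for dy in (-1, 1)),
               key=lambda d: cost(*d))
```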
62 Estimating the subpixel position. For a projector pattern $p_i$, we model the interpolation of the intensities of the four neighboring projector pixels of a quadrant as a function of $\lambda_x$ and $\lambda_y$ using a bilinear plane: $K_i(x, y, \hat{\delta}_x, \hat{\delta}_y, \lambda_x, \lambda_y) = (1 - \lambda_x)(1 - \lambda_y)\, p_i[x, y] + \lambda_x (1 - \lambda_y)\, p_i[x + \hat{\delta}_x, y] + (1 - \lambda_x) \lambda_y\, p_i[x, y + \hat{\delta}_y] + \lambda_x \lambda_y\, p_i[x + \hat{\delta}_x, y + \hat{\delta}_y]$. [sent-155, score-1.563]
63 The 2D intersection of the two bilinear planes defined by projector patterns $p_i$ and $p_j$ is obtained by solving $K_i(x, y, \hat{\delta}_x, \hat{\delta}_y, \lambda_x, \lambda_y) = K_j(x, y, \hat{\delta}_x, \hat{\delta}_y, \lambda_x, \lambda_y)$. [sent-163, score-0.802]
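A sketch of the bilinear model and the resulting intersection test (the pi[y, x] indexing convention and the names are assumptions for illustration):

```python
def bilinear_K(pi, x, y, dx, dy, lx, ly):
    """Bilinear interpolation of pattern p_i over one quadrant.

    pi: 2D pattern array, (x, y): matched projector pixel, (dx, dy): quadrant
    offsets in {-1, 1}, (lx, ly): candidate subpixel position in [0, 1]^2.
    """
    return ((1 - lx) * (1 - ly) * pi[y, x]
            + lx * (1 - ly) * pi[y, x + dx]
            + (1 - lx) * ly * pi[y + dy, x]
            + lx * ly * pi[y + dy, x + dx])

def on_zero_crossing(pi, pj, x, y, dx, dy, lx, ly, tol=1e-9):
    """True where the two bilinear planes intersect: K_i = K_j.

    The zero set of K_i - K_j is itself bilinear in (lx, ly), so it traces
    a curve inside the unit square.
    """
    return abs(bilinear_K(pi, x, y, dx, dy, lx, ly)
               - bilinear_K(pj, x, y, dx, dy, lx, ly)) < tol
```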
64 A pair of patterns $(p_i, p_j)$ is discarded if the two planes do not intersect. [sent-168, score-0.329]
65 Otherwise, if $\mathrm{bit}(c_i[\hat{p}] - c_j[\hat{p}]) = \mathrm{bit}(p_i[p] - p_j[p])$, then the subpixel position should be located on the side of the curve towards $p$. [sent-169, score-0.794]
66 Thus, each pair $(p_i, p_j)$ for which the planes intersect effectively provides a constraint on the value of the true subpixel location $(\hat{\lambda}_x, \hat{\lambda}_y)$. [sent-171, score-0.662]
67 Hierarchical voting. The true subpixel position $(\hat{\lambda}_x, \hat{\lambda}_y)$ is the one satisfying the most constraints. [sent-177, score-0.628]
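A simplified, flat-grid version of this vote is sketched below; the paper's hierarchical scheme would re-vote on a finer grid around the current winner, and the constraint callables are assumptions for illustration:

```python
import numpy as np

def vote_subpixel(constraints, grid=33):
    """Return the (lx, ly) in [0, 1]^2 satisfying the most constraints.

    constraints: iterable of callables g(lx, ly) -> bool, one per surviving
    pattern pair, true when (lx, ly) lies on the side of that pair's
    zero-crossing curve predicted by the observed camera bit.
    """
    ls = np.linspace(0.0, 1.0, grid)
    votes = np.zeros((grid, grid), dtype=np.int32)
    for g in constraints:
        for iy, ly in enumerate(ls):
            for ix, lx in enumerate(ls):
                votes[iy, ix] += bool(g(lx, ly))
    iy, ix = np.unravel_index(int(np.argmax(votes)), votes.shape)
    return float(ls[ix]), float(ls[iy])
```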
68 However, in practice, camera bits can have errors due to image noise, changes in surface albedo $\alpha$, and the gamma nonlinearity of the camera. [sent-189, score-0.419]
69 For our synthetic experiment, the estimated subpixel location is only slightly affected by the gamma nonlinearity of the camera. [sent-191, score-0.731]
70 Fig. 4 plots the RMS subpixel error for different standard deviations of the blur in pixels and noise levels. [sent-200, score-0.699]
71 Synthetic subpixel positions were created by shifting 50 patterns of f = 64 cycles per frame at an 800×600 resolution. [sent-201, score-0.942]
72 Note that, for all tests, we did not observe that the actual subpixel position has any effect on the RMSE (data not shown). [sent-212, score-0.628]
73 The robot was first scanned alone (left), then a plastic board was added to the scene to create indirect lighting (right). [sent-215, score-0.257]
74 We compare our method to several other subpixel methods: the original phase shifting (PS) method of [19], modulated phase shifting (ModPS) presented in [3], and micro phase shifting (MicroPS) [10]. [sent-219, score-1.514]
75 In all our experiments, we used a Samsung SP-400B projector with a resolution of 800×600 pixels and a Prosilica GC-450C camera with a resolution of 659×493. [sent-220, score-0.42]
76 In order to measure the sensitivity of each reconstruction method to interreflection, we reconstructed the robot with and without interreflections from a nearby plane (see Fig. [sent-231, score-0.169]
77 For fair comparison, every method used a budget of approximately 50 patterns to perform the scan. [sent-236, score-0.242]
78 In order to do so, PS used 8 frequencies from 1/8 to 1/1024, 3 shifts per frequency for each direction (horizontal and vertical) for a total of 48 projected patterns. [sent-237, score-0.204]
79 ModPS used 4 frequencies from 1/16 to 1/1024: the highest frequency was modulated by 6 shifted versions of an orthogonal sine wave of frequency 1/16. [sent-238, score-0.446]
80 Histogram of reconstruction variations for the robot scene featuring strong interreflections. [sent-246, score-0.187]
81 The unwrapping used the method of [11] and 9 patterns per direction, so ModPS used a total of 54 patterns. [sent-248, score-0.34]
82 We also added the results of UQ [4], which is not a subpixel reconstruction but provides a scale to appreciate how well all the subpixel algorithms perform. [sent-252, score-1.279]
83 PS-400 is the result of PS using 25 patterns per frequency (as opposed to 3 which is the minimum). [sent-253, score-0.345]
84 The top row shows the reconstruction using 50 patterns for UQS, and 50 patterns for MicroPS. [sent-272, score-0.563]
85 The bottom row shows the same reconstruction using 200 projected patterns for each method (for MicroPS, 86 patterns were used in each direction to estimate the phase). [sent-273, score-0.602]
86 The reconstructions are similar for both methods at 50 patterns, even though some errors can be spotted in the reconstruction of slanted surfaces by MicroPS. [sent-274, score-0.167]
87 Note that the MicroPS method uses 1D high frequency patterns to unwrap and compute the phase. [sent-278, score-0.389]
88 This is especially visible in the corner at the back of the scene, where two bumps are falsely reconstructed as a result of some indirect lighting bouncing off each wall, as seen in Fig. [sent-280, score-0.357]
89 Two bumps on each side of the corner are falsely reconstructed using MicroPS due to the indirect illumination generated by its 1D patterns. [sent-290, score-0.392]
90 When using only 7 patterns as presented in [10], a median filter is applied to correct unwrapping errors and noisy phase estimates due to a low signal-to-noise ratio. [sent-291, score-0.589]
91 However, when applied, the median filter does correct some errors (pixels on the edge of the ball for instance), but also removes the correspondences found on small objects like the screwdriver as shown in Fig. [sent-293, score-0.195]
92 MicroPS suffers from a trade-off between correspondence errors in discontinuities and the lack of correspondences on small objects. [sent-295, score-0.167]
93 Conclusion. We proposed a method to produce highly accurate subpixel correspondence using a projector and a camera. [sent-298, score-1.006]
94 It relies on the principles of unstructured light scanning methods that are robust to common and challenging difficulties arising in active scanning systems. [sent-299, score-0.367]
95 We use continuous gray scale patterns produced in the frequency domain. [sent-300, score-0.388]
96 Each pair of images contributes a bit to the quadratic codes, which increases the information used in the subpixel estimation and also decreases the number of patterns needed to match. [sent-302, score-1.077]
97 The number of patterns used was 50 (top row) and 200 (bottom row). [sent-305, score-0.242]
98 Reconstructions produced by our method were in general comparable to the ones produced by state of the art phase shifting methods, but showed increased robustness to indirect illumination and depth discontinuities. [sent-307, score-0.468]
99 A state of the art in structured light patterns for surface profilometry. [sent-412, score-0.32]
100 Correspondence maps of the screwdriver and hanger using (a) UQS, (b) MicroPS without median filtering, and (c) MicroPS with 5×5 median filtering. [sent-445, score-0.168]
wordName wordTfidf (topN-words)
[('subpixel', 0.6), ('projector', 0.344), ('microps', 0.244), ('patterns', 0.242), ('indirect', 0.184), ('uqs', 0.155), ('phase', 0.144), ('quadrant', 0.125), ('modulated', 0.106), ('bit', 0.106), ('frequency', 0.103), ('shifting', 0.1), ('unwrapping', 0.098), ('pj', 0.097), ('unstructured', 0.097), ('modps', 0.089), ('gamma', 0.085), ('scanning', 0.083), ('pi', 0.082), ('reconstruction', 0.079), ('micro', 0.076), ('bits', 0.074), ('robot', 0.07), ('montr', 0.066), ('umont', 0.066), ('correspondence', 0.062), ('frequencies', 0.062), ('reconstructions', 0.06), ('quadratic', 0.055), ('light', 0.052), ('median', 0.05), ('pixel', 0.05), ('codes', 0.049), ('camera', 0.049), ('roy', 0.049), ('eal', 0.049), ('interreflections', 0.049), ('pattern', 0.048), ('epipolar', 0.048), ('intensities', 0.047), ('discontinuities', 0.047), ('affected', 0.046), ('blur', 0.045), ('screwdriver', 0.044), ('unwrap', 0.044), ('gray', 0.043), ('shifted', 0.043), ('ball', 0.043), ('reconstructed', 0.041), ('illumination', 0.04), ('ci', 0.04), ('cj', 0.039), ('photometric', 0.039), ('bumps', 0.039), ('couture', 0.039), ('christmas', 0.039), ('xpi', 0.039), ('projected', 0.039), ('sij', 0.039), ('universit', 0.039), ('scene', 0.038), ('ps', 0.037), ('planes', 0.037), ('subsurface', 0.036), ('lighting', 0.035), ('code', 0.034), ('falsely', 0.034), ('salvi', 0.034), ('ki', 0.034), ('recovers', 0.033), ('scattering', 0.033), ('observes', 0.033), ('voted', 0.031), ('uniquely', 0.031), ('correspondences', 0.03), ('jun', 0.03), ('side', 0.03), ('geometry', 0.03), ('wave', 0.029), ('albedo', 0.029), ('calibration', 0.029), ('intensity', 0.029), ('sine', 0.028), ('codeword', 0.028), ('errors', 0.028), ('position', 0.028), ('difficulties', 0.028), ('noise', 0.027), ('pixels', 0.027), ('jan', 0.027), ('kj', 0.027), ('surface', 0.026), ('rms', 0.026), ('gupta', 0.026), ('lsh', 0.025), ('pair', 0.025), ('active', 0.024), ('white', 0.024), ('filtering', 0.024), ('corner', 0.024)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999982 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length
Author: Nicolas Martin, Vincent Couture, Sébastien Roy
Abstract: We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting.
2 0.23679315 82 iccv-2013-Compensating for Motion during Direct-Global Separation
Author: Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan
Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.
3 0.15953961 405 iccv-2013-Structured Light in Sunlight
Author: Mohit Gupta, Qi Yin, Shree K. Nayar
Abstract: Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than the projected light. In this paper, we propose the concept of light-concentration to overcome strong ambient illumination. Our key observation is that given a fixed light (power) budget, it is always better to allocate it sequentially in several portions of the scene, as compared to spreading it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders lower acquisition time than existing approaches. Our approach is illumination-adaptive as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.
4 0.10508513 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects
Author: Michael Weinmann, Aljosa Osep, Roland Ruiters, Reinhard Klein
Abstract: In this paper, we present a novel, robust multi-view normal field integration technique for reconstructing the full 3D shape of mirroring objects. We employ a turntablebased setup with several cameras and displays. These are used to display illumination patterns which are reflected by the object surface. The pattern information observed in the cameras enables the calculation of individual volumetric normal fields for each combination of camera, display and turntable angle. As the pattern information might be blurred depending on the surface curvature or due to nonperfect mirroring surface characteristics, we locally adapt the decoding to the finest still resolvable pattern resolution. In complex real-world scenarios, the normal fields contain regions without observations due to occlusions and outliers due to interreflections and noise. Therefore, a robust reconstruction using only normal information is challenging. Via a non-parametric clustering of normal hypotheses derived for each point in the scene, we obtain both the most likely local surface normal and a local surface consistency estimate. This information is utilized in an iterative mincut based variational approach to reconstruct the surface geometry.
5 0.099101424 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects
Author: Benjamin Ummenhofer, Thomas Brox
Abstract: 3D reconstruction deals with the problem of finding the shape of an object from a set of images. Thin objects that have virtually no volume pose a special challenge for reconstruction with respect to shape representation and fusion of depth information. In this paper we present a dense point-based reconstruction method that can deal with this special class of objects. We seek to jointly optimize a set of depth maps by treating each pixel as a point in space. Points are pulled towards a common surface by pairwise forces in an iterative scheme. The method also handles the problem of opposed surfaces by means of penalty forces. Efficient optimization is achieved by grouping points to superpixels and a spatial hashing approach for fast neighborhood queries. We show that the approach is on a par with state-of-the-art methods for standard multi view stereo settings and gives superior results for thin objects.
6 0.093790054 255 iccv-2013-Local Signal Equalization for Correspondence Matching
7 0.092974082 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction
8 0.085614584 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination
9 0.072038002 242 iccv-2013-Learning People Detectors for Tracking in Crowded Scenes
10 0.070545822 17 iccv-2013-A Global Linear Method for Camera Pose Registration
11 0.070453994 387 iccv-2013-Shape Anchors for Data-Driven Multi-view Reconstruction
12 0.0678716 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues
13 0.064255469 367 iccv-2013-SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels
14 0.063607149 219 iccv-2013-Internet Based Morphable Model
15 0.06274575 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera
16 0.061503969 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation
17 0.061027456 139 iccv-2013-Elastic Fragments for Dense Scene Reconstruction
18 0.060139973 433 iccv-2013-Understanding High-Level Semantics by Modeling Traffic Patterns
19 0.059864763 174 iccv-2013-Forward Motion Deblurring
20 0.059550699 397 iccv-2013-Space-Time Tradeoffs in Photo Sequencing
topicId topicWeight
[(0, 0.136), (1, -0.113), (2, -0.037), (3, 0.009), (4, -0.039), (5, 0.029), (6, 0.022), (7, -0.115), (8, -0.007), (9, -0.011), (10, -0.015), (11, -0.019), (12, 0.03), (13, -0.016), (14, -0.006), (15, -0.035), (16, -0.023), (17, 0.051), (18, 0.008), (19, -0.001), (20, -0.012), (21, -0.008), (22, -0.036), (23, -0.029), (24, -0.064), (25, 0.033), (26, 0.002), (27, -0.006), (28, 0.039), (29, -0.099), (30, 0.112), (31, 0.043), (32, 0.101), (33, -0.039), (34, 0.018), (35, 0.042), (36, 0.014), (37, -0.027), (38, -0.083), (39, 0.025), (40, 0.024), (41, 0.017), (42, 0.056), (43, 0.017), (44, -0.056), (45, -0.073), (46, -0.007), (47, 0.073), (48, -0.025), (49, 0.002)]
simIndex simValue paperId paperTitle
same-paper 1 0.93359822 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length
Author: Nicolas Martin, Vincent Couture, Sébastien Roy
Abstract: We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting.
2 0.87065423 405 iccv-2013-Structured Light in Sunlight
Author: Mohit Gupta, Qi Yin, Shree K. Nayar
Abstract: Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than the projected light. In this paper, we propose the concept of light-concentration to overcome strong ambient illumination. Our key observation is that given a fixed light (power) budget, it is always better to allocate it sequentially in several portions of the scene, as compared to spreading it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders lower acquisition time than existing approaches. Our approach is illumination-adaptive as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.
Author: Donghyeon Cho, Minhaeng Lee, Sunyeong Kim, Yu-Wing Tai
Abstract: Light-field imaging systems have got much attention recently as the next generation camera model. A light-field imaging system consists of three parts: data acquisition, manipulation, and application. Given an acquisition system, it is important to understand how a light-field camera converts from its raw image to its resulting refocused image. In this paper, using the Lytro camera as an example, we describe step-by-step procedures to calibrate a raw light-field image. In particular, we are interested in knowing the spatial and angular coordinates of the micro lens array and the resampling process for image reconstruction. Since Lytro uses a hexagonal arrangement of a micro lens image, additional treatments in calibration are required. After calibration, we analyze and compare the performances of several resampling methods for image reconstruction with and without calibration. Finally, a learning based interpolation method is proposed which demonstrates a higher quality image reconstruction than previous interpolation methods including a method used in Lytro software.
Author: Ying Fu, Antony Lam, Imari Sato, Takahiro Okabe, Yoichi Sato
Abstract: Hyperspectral imaging is beneficial to many applications but current methods do not consider fluorescent effects which are present in everyday items ranging from paper, to clothing, to even our food. Furthermore, everyday fluorescent items exhibit a mix of reflectance and fluorescence. So proper separation of these components is necessary for analyzing them. In this paper, we demonstrate efficient separation and recovery of reflective and fluorescent emission spectra through the use of high frequency illumination in the spectral domain. With the obtained fluorescent emission spectra from our high frequency illuminants, we then present to our knowledge, the first method for estimating the fluorescent absorption spectrum of a material given its emission spectrum. Conventional bispectral measurement of absorption and emission spectra needs to examine all combinations of incident and observed light wavelengths. In contrast, our method requires only two hyperspectral images. The effectiveness of our proposed methods are then evaluated through a combination of simulation and real experiments. We also demonstrate an application of our method to synthetic relighting of real scenes.
5 0.74236053 82 iccv-2013-Compensating for Motion during Direct-Global Separation
Author: Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan
Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.
6 0.72885239 207 iccv-2013-Illuminant Chromaticity from Image Sequences
7 0.67762786 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes
8 0.65260452 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues
9 0.63566303 262 iccv-2013-Matching Dry to Wet Materials
10 0.63398391 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects
11 0.62720716 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination
12 0.5890736 135 iccv-2013-Efficient Image Dehazing with Boundary Constraint and Contextual Regularization
13 0.5756197 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects
14 0.57332009 5 iccv-2013-A Color Constancy Model with Double-Opponency Mechanisms
15 0.54552817 413 iccv-2013-Target-Driven Moire Pattern Synthesis by Phase Modulation
16 0.53874248 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces
17 0.53590739 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal
18 0.52603012 255 iccv-2013-Local Signal Equalization for Correspondence Matching
19 0.5167017 284 iccv-2013-Multiview Photometric Stereo Using Planar Mesh Parameterization
20 0.49820679 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera
topicId topicWeight
[(2, 0.085), (7, 0.032), (12, 0.025), (26, 0.076), (31, 0.037), (40, 0.014), (42, 0.075), (48, 0.016), (54, 0.236), (64, 0.058), (73, 0.029), (89, 0.198), (98, 0.022)]
simIndex simValue paperId paperTitle
1 0.83668143 389 iccv-2013-Shortest Paths with Curvature and Torsion
Author: Petter Strandmark, Johannes Ulén, Fredrik Kahl, Leo Grady
Abstract: This paper describes a method of finding thin, elongated structures in images and volumes. We use shortest paths to minimize very general functionals of higher-order curve properties, such as curvature and torsion. Our globally optimal method uses line graphs and its runtime is polynomial in the size of the discretization, often in the order of seconds on a single computer. To our knowledge, we are the first to perform experiments in three dimensions with curvature and torsion regularization. The largest graphs we process have almost one hundred billion arcs. Experiments on medical images and in multi-view reconstruction show the significance and practical usefulness of regularization based on curvature while torsion is still only tractable for small-scale problems.
same-paper 2 0.81732196 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length
Author: Nicolas Martin, Vincent Couture, Sébastien Roy
Abstract: We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting.
3 0.81315506 68 iccv-2013-Camera Alignment Using Trajectory Intersections in Unsynchronized Videos
Author: Thomas Kuo, Santhoshkumar Sunderrajan, B.S. Manjunath
Abstract: This paper addresses the novel and challenging problem of aligning camera views that are unsynchronized by low and/or variable frame rates using object trajectories. Unlike existing trajectory-based alignment methods, our method does not require frame-to-frame synchronization. Instead, we propose using the intersections of corresponding object trajectories to match views. To find these intersections, we introduce a novel trajectory matching algorithm based on matching Spatio-Temporal Context Graphs (STCGs). These graphs represent the distances between trajectories in time and space within a view, and are matched to an STCG from another view to find the corresponding trajectories. To the best of our knowledge, this is one of the first attempts to align views that are unsynchronized with variable frame rates. The results on simulated and real-world datasets show trajectory intersections are a viable feature for camera alignment, and that the trajectory matching method performs well in real-world scenarios.
4 0.77635306 293 iccv-2013-Nonparametric Blind Super-resolution
Author: Tomer Michaeli, Michal Irani
Abstract: Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function ‘PSF’ of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for “blind” super resolution. In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. This leads to significant improvement in SR results.
5 0.7578789 147 iccv-2013-Event Recognition in Photo Collections with a Stopwatch HMM
Author: Lukas Bossard, Matthieu Guillaumin, Luc Van_Gool
Abstract: The task of recognizing events in photo collections is central for automatically organizing images. It is also very challenging, because of the ambiguity of photos across different event classes and because many photos do not convey enough relevant information. Unfortunately, the field still lacks standard evaluation data sets to allow comparison of different approaches. In this paper, we introduce and release a novel data set of personal photo collections containing more than 61,000 images in 807 collections, annotated with 14 diverse social event classes. Casting collections as sequential data, we build upon recent and state-of-the-art work in event recognition in videos to propose a latent sub-event approach for event recognition in photo collections. However, photos in collections are sparsely sampled over time and come in bursts from which transpires the importance of specific moments for the photographers. Thus, we adapt a discriminative hidden Markov model to allow the transitions between states to be a function of the time gap between consecutive images, which we coin as Stopwatch Hidden Markov model (SHMM). In our experiments, we show that our proposed model outperforms approaches based only on feature pooling or a classical hidden Markov model. With an average accuracy of 56%, we also highlight the difficulty of the data set and the need for future advances in event recognition in photo collections.
6 0.72083402 426 iccv-2013-Training Deformable Part Models with Decorrelated Features
7 0.71790254 396 iccv-2013-Space-Time Robust Representation for Action Recognition
8 0.7166357 404 iccv-2013-Structured Forests for Fast Edge Detection
9 0.71640199 127 iccv-2013-Dynamic Pooling for Complex Event Recognition
10 0.71558213 265 iccv-2013-Mining Motion Atoms and Phrases for Complex Action Recognition
11 0.71545392 299 iccv-2013-Online Video SEEDS for Temporal Window Objectness
12 0.71538389 160 iccv-2013-Fast Object Segmentation in Unconstrained Video
13 0.7149514 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria
14 0.71479499 107 iccv-2013-Deformable Part Descriptors for Fine-Grained Recognition and Attribute Prediction
15 0.71420836 89 iccv-2013-Constructing Adaptive Complex Cells for Robust Visual Tracking
16 0.71399528 4 iccv-2013-ACTIVE: Activity Concept Transitions in Video Event Classification
17 0.713943 260 iccv-2013-Manipulation Pattern Discovery: A Nonparametric Bayesian Approach
18 0.71393156 204 iccv-2013-Human Attribute Recognition by Rich Appearance Dictionary
19 0.71391845 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization
20 0.71361476 439 iccv-2013-Video Co-segmentation for Meaningful Action Extraction