iccv iccv2013 iccv2013-82 knowledge-graph by maker-knowledge-mining

82 iccv-2013-Compensating for Motion during Direct-Global Separation


Source: pdf

Author: Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan

Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Robotics Institute, Carnegie Mellon University. Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. [sent-3, score-0.552]

2 Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. [sent-4, score-0.99]

3 In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. [sent-5, score-0.982]

4 Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. [sent-6, score-0.348]

5 We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. [sent-7, score-0.887]

6 Introduction: The radiance of a scene point illuminated by a light source is the sum of the direct and global components. [sent-10, score-0.725]

7 The direct component is the light from the source that undergoes a single reflection in the scene before reaching the observer. [sent-11, score-0.594]

8 The global component is due to indirect lighting from inter-reflections, subsurface scattering, volumetric scattering and diffusion. [sent-12, score-0.448]

9 Separating the direct and global components of illumination provides valuable insights into how light interacts with a scene. [sent-13, score-0.7]

10 Being able to extract the direct component of illumination can improve the performance of classical photometry based algorithms like shape from shading as well as structured light reconstruction which typically do not account for global effects. [sent-14, score-0.76]

11 An efficient method for finding the global and direct components was first proposed in [14]. [sent-16, score-0.396]

12 The light source, camera and scene need to remain stationary during image acquisition. [sent-19, score-0.382]

13 Single shot structured light methods [9] can be used on dynamic scenes but have low spatial resolution while multi-image methods [17] produce high quality depth estimates but require the scene to remain stationary. [sent-22, score-0.538]

14 There has been work on developing motion compensation schemes to allow multi-image structured light algorithms to be applied to dynamic scenes [10, 20]. [sent-23, score-0.819]

15 One approach is interleaving the projector patterns for structure estimation with uniform lighting for motion tracking. [sent-24, score-0.905]

16 Most structured light algorithms do not account for global illumination and those that do [3, 5] require many additional images. [sent-25, score-0.454]

17 In this work, we address motion compensation in the context of direct-global separation. [sent-26, score-0.503]

18 This allows separation to be performed on video sequences in which the projector-camera system and/or the scene are moving. [sent-28, score-0.367]

19 We assume that the underlying global and direct components of a scene point vary only slightly over small motions. [sent-29, score-0.489]

20 This means that if the frames in a temporal window can be aligned, the separation technique in [14] can be applied to the aligned frames. [sent-30, score-0.688]
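
A minimal Python/numpy sketch of this step, assuming the aligned frames satisfy the conditions of [14] (every pixel fully lit in at least one frame and fully dark in another); the function name and array layout are illustrative:

    import numpy as np

    def separate_max_min(frames):
        """Max/min direct-global separation of [14] on an aligned stack.

        frames: (T, H, W) motion-compensated images under shifted
        high-frequency binary patterns (half bright, half dark).
        """
        L_max = frames.max(axis=0)   # pixel observed fully lit
        L_min = frames.min(axis=0)   # pixel observed fully dark
        # L_max = I_d + 0.5*I_g and L_min = 0.5*I_g, so:
        I_d = L_max - L_min
        I_g = 2.0 * L_min
        return I_d, I_g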

21 We use these relit images to aid alignment and then estimate the global and direct components from the aligned images. [sent-33, score-0.722]

22 We use all the frames in a temporal window for estimating the global and direct components. [sent-35, score-0.668]

23 No frames are used exclusively for tracking, so our method can handle faster motions than interleaving at a given frame rate. [sent-36, score-0.513]

24 We show that our method compensates for motion effectively and generates separation results close to ground truth. [sent-38, score-0.438]

25 We show that not compensating for motion introduces significant artifacts in the separation and compare our method to alternatives such as single shot separation and interleaving. [sent-39, score-0.847]

26 Related Work: The original work on direct-global separation [14] describes methods for separation using active illumination and source occluders. [sent-44, score-0.721]

27 With active illumination, the separation can be performed using three sinusoid patterns, but the best results with practical projector-camera systems require around 20 high frequency pattern images. [sent-45, score-0.392]

28 A method that uses a single image was also presented, but it generates results at a fraction of the projector’s resolution, which is undesirable since most projector-camera systems are projector resolution limited. [sent-46, score-0.418]

29 In [15] an optical processing method that can be used to directly acquire the global component of illumination is presented. [sent-47, score-0.423]

30 Global illumination and projector defocus were modeled jointly in [6] for depth recovery in scenes with significant global light transport effects. [sent-48, score-0.9]

31 In [4], the separation technique was extended to scenes illuminated by multiple controllable light sources. [sent-49, score-0.595]

32 Their goal was to extract the direct component for each light source to aid structure recovery techniques where global illumination is often a severe source of systematic error. [sent-50, score-0.851]

33 The need for motion compensation also arises in structured light for 3D estimation. [sent-51, score-0.718]

34 [19] developed structured light patterns that can be decoded both spatially and temporally, which allows for motion adaptation. [sent-53, score-0.515]

35 In [20] a motion compensation method for the phase shift structured light algorithm is presented. [sent-55, score-0.718]

36 Motion estimation and compensation in image sequences with projected patterns is often done by interleaving the patterns with uniform lighting [21]. [sent-57, score-0.879]

37 A similar approach is used in the structured light motion compensation scheme in [10], where patterns for structure estimation are interleaved with patterns optimized for estimating motion. [sent-58, score-0.969]

38 An alternative optical flow formulation was derived in [18] that uses a direct search to compute optical flow and can accommodate arbitrary data loss terms. [sent-65, score-0.578]

39 Limitations: We do not model changes in the underlying direct and global components at a scene point within a small temporal window. [sent-69, score-0.555]

40 Image Formation Model: The brightness It(x) of a pixel x at time t is a combination of the direct component Idt and global component Igt. [sent-73, score-0.633]

41 When a binary pattern illuminates the scene, the direct component is modulated by the pattern. [sent-74, score-0.392]

42 If the pattern has an equal number of bright and dark pixels and has high spatial frequency compared to Igt, the contribution of the global illumination to the brightness is (1/2)Igt [14]. [sent-75, score-0.46]
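
As a worked illustration of this image formation model, a sketch that synthesizes patterned frames; the synthetic components, image size and checkerboard layout are assumptions, not the paper's data:

    import numpy as np

    rng = np.random.default_rng(0)
    H, W, T = 64, 64, 9
    I_d = rng.uniform(0.0, 1.0, (H, W))   # synthetic direct component
    I_g = rng.uniform(0.0, 0.5, (H, W))   # synthetic global component

    yy, xx = np.mgrid[0:H, 0:W]
    frames, patterns = [], []
    for t in range(T):
        # shifted high-frequency checkerboard: half bright, half dark
        s_t = ((((xx + 2 * t) // 4) + (yy // 4)) % 2).astype(float)
        patterns.append(s_t)
        frames.append(s_t * I_d + 0.5 * I_g)  # I_t = s_t*I_d + (1/2)*I_g
    frames, patterns = np.stack(frames), np.stack(patterns)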

43 We colocate our projector and camera so the mapping between projector and camera pixels is fixed and independent of scene geometry. [sent-77, score-0.909]

44 Even though the patterns are binary, the value of st at a pixel can be continuous because real projectors do not have ideal step responses and the projector and camera pixels need not be aligned. [sent-78, score-0.573]

45 The specularities on the candles appear in the direct image and most of the color is due to subsurface scattering in the wax and appears in the global image. [sent-83, score-0.61]

46 We assume that the motion within a sliding window is small enough for these changes to be negligible. [sent-85, score-0.431]

47 This allows us to relate the global and direct components at time instant t in the sliding window to time 0: Ig0(x) ≈ Igt(Wt(x)) and Id0(x) ≈ Idt(Wt(x)), where Wt is an (unknown) warping function that aligns the view at time 0 to the view at time t. [sent-86, score-0.663]
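
The warp Wt can be applied as a dense resampling of frame t into the reference view; a sketch using bilinear interpolation (the displacement-field convention here is an assumption):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def apply_warp(frame_t, flow):
        """Resample frame t into the reference (time 0) view.

        flow: (2, H, W) displacements so that W_t(x) = x + flow(x); the
        value for reference pixel x is read from frame_t at x + flow(x).
        """
        H, W = frame_t.shape
        yy, xx = np.mgrid[0:H, 0:W].astype(float)
        coords = np.stack([yy + flow[0], xx + flow[1]])
        return map_coordinates(frame_t, coords, order=1, mode='nearest')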

48 Wt depends on the geometry of the scene and the motion of the scene and projector-camera system. [sent-88, score-0.35]

49 Motion Estimation and Compensation: We compute the direct-global separation at a frame in the video sequence using a small temporal sliding window centered at that frame. [sent-92, score-0.693]

50 We seek to compensate for the motion that occurs inside a temporal sliding window so that the frames can be aligned to each other. [sent-93, score-0.73]

51 With the help of the image formation model, we estimate how the scene would have appeared at each time instant under uniform lighting instead of the patterned illumination. [sent-94, score-0.336]

52 Once the images are aligned we can compute the global and direct components robustly. [sent-96, score-0.458]

53 The patterns violate the brightness and contrast constancy assumptions most optical flow methods rely on. [sent-100, score-0.382]

54 To aid alignment, we compute an approximation of how the scene would have appeared (Ĩft) under uniform illumination from the frame It and the pattern st used to illuminate the scene. [sent-101, score-0.595]

55 Under uniform illumination, the brightness at a pixel is the sum of two unknowns, the direct component and the global component: Ift(x) = Igt(x) + Idt(x). [sent-103, score-0.689]

56 To find an approximate solution to the problem, we introduce a regularizer that enforces piecewise spatial continuity of the estimated global and direct components. [sent-106, score-0.425]
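
A minimal sketch of this regularized relighting step, with a quadratic smoothness penalty standing in for the paper's piecewise (TV-style) regularizer; the solver, step size and lambda are assumptions:

    import numpy as np
    from scipy.ndimage import laplace

    def relight_frame(I_t, s_t, lam=0.1, n_iters=500, step=0.5):
        """Estimate I_g, I_d from one patterned frame and return the
        approximate uniformly lit image I_f = I_g + I_d.

        Data term: (s_t*I_d + 0.5*I_g - I_t)^2 per pixel (equation 1);
        quadratic smoothness is used here instead of the paper's
        piecewise-continuity regularizer.
        """
        I_d = I_t.astype(float)
        I_g = I_t.astype(float)
        for _ in range(n_iters):
            r = s_t * I_d + 0.5 * I_g - I_t            # model residual
            I_d -= step * (s_t * r - lam * laplace(I_d))
            I_g -= step * (0.5 * r - lam * laplace(I_g))
            np.clip(I_d, 0.0, None, out=I_d)           # keep radiance >= 0
            np.clip(I_g, 0.0, None, out=I_g)
        return I_g + I_d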

57 These artifacts are caused by projector blur and small errors in the colocation between the projector and camera. [sent-124, score-0.778]

58 Registering Images: To align a frame to the center frame, we could simply compute optical flow between the relit frames. [sent-131, score-0.544]

59 where α(x, Wt) is a weight that is high when a point is lit (st close to 1) in both the center frame I0 and the current frame It. [sent-140, score-0.324]
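
One plausible form for such a weight (the sigmoid shape and its parameters are assumptions; the text only requires α to be high where the point is lit in both frames):

    import numpy as np

    def lit_weight(s_0, s_t_warped, tau=10.0, thresh=0.5):
        """alpha(x, W_t): near 1 where the pattern value is close to 1
        in both the center frame and the warped current frame."""
        lit = lambda s: 1.0 / (1.0 + np.exp(-tau * (s - thresh)))
        return lit(s_0) * lit(s_t_warped)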

60 Because we are seeking to correct small errors in an existing optical flow estimate, we search for a refined warp at each pixel using a small window centered around the original warp estimate. [sent-149, score-0.525]
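
A brute-force version of this local refinement, scoring candidate displacements by patch SSD (the paper scores warps with its weighted relit-image objective, so this is a simplified stand-in; names and window sizes are illustrative):

    import numpy as np

    def refine_warp(ref, cur, flow, radius=2, patch=3):
        """Search a (2*radius+1)^2 window around the initial flow at
        each pixel and keep the displacement with the lowest patch SSD."""
        H, W = ref.shape
        r = patch // 2
        refined = flow.copy()
        for y in range(r, H - r):
            for x in range(r, W - r):
                ref_patch = ref[y-r:y+r+1, x-r:x+r+1]
                best, best_d = np.inf, (0, 0)
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy = y + int(flow[0, y, x]) + dy
                        xx = x + int(flow[1, y, x]) + dx
                        if r <= yy < H - r and r <= xx < W - r:
                            d = np.sum((ref_patch -
                                        cur[yy-r:yy+r+1, xx-r:xx+r+1])**2)
                            if d < best:
                                best, best_d = d, (dy, dx)
                refined[0, y, x] = int(flow[0, y, x]) + best_d[0]
                refined[1, y, x] = int(flow[1, y, x]) + best_d[1]
        return refined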

61 If the motion that occurs in a sliding window is large, optical flow may fail to correctly align some frames to the center frame. [sent-150, score-0.852]

62 We detect poorly aligned frames by thresholding the correlation between the warped frame Wt ◦ Ĩft and the center frame Ĩf0. [sent-151, score-0.422]

63 Poorly aligned frames are discarded from the sliding window. [sent-152, score-0.325]
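
A sketch of this rejection test on the relit images (the correlation threshold is an assumption):

    import numpy as np

    def keep_well_aligned(center_relit, warped_relit_frames, min_corr=0.9):
        """Keep only frames whose warped relit image correlates well
        with the center frame's relit image."""
        c = (center_relit - center_relit.mean()) / (center_relit.std() + 1e-8)
        kept = []
        for f in warped_relit_frames:
            w = (f - f.mean()) / (f.std() + 1e-8)
            if np.mean(c * w) >= min_corr:  # normalized cross-correlation
                kept.append(f)
        return kept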

64 Computing Direct-Global Separation: Once the frames in a window have been warped to align with the center frame, we in effect have a set of images of the scene captured from the same viewpoint with different illumination patterns. [sent-155, score-0.62]

65 Alternatively, since the projector pattern values (st) at each pixel are known, the global and direct components can be determined by fitting a line to the observed brightness values at a pixel using equation 1. [sent-157, score-0.983]

66 For this line fit to make sense, each pixel needs to be observed under a range of projector pattern values. [sent-158, score-0.446]
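
A vectorized sketch of this per-pixel line fit (equation 1 gives I_t(x) = s_t(x)*I_d(x) + 0.5*I_g(x), a line in s_t with slope I_d and intercept 0.5*I_g); returning the normal-equation denominator lets the caller flag pixels where the pattern values barely varied:

    import numpy as np

    def separate_by_line_fit(frames, patterns, eps=1e-6):
        """Least-squares fit of I_t = s_t*I_d + 0.5*I_g at every pixel.

        frames, patterns: (T, H, W) aligned images and pattern values.
        """
        T = frames.shape[0]
        S  = patterns.sum(axis=0)
        SS = (patterns ** 2).sum(axis=0)
        SI = (patterns * frames).sum(axis=0)
        I  = frames.sum(axis=0)
        denom = T * SS - S ** 2          # near 0 where s_t barely varied
        I_d = (T * SI - S * I) / (denom + eps)
        I_g = 2.0 * (I - S * I_d) / T    # intercept b = 0.5*I_g
        return I_d, I_g, denom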

67 As a result, there will be pixels in the image where the global and direct components cannot be estimated well because the projector brightness did not change sufficiently at the corresponding scene point. [sent-160, score-0.948]

68 We search for piecewise continuous global and direct components that are a good fit to the observed aligned image data by minimizing L(Ig0, Id0) = Σt∈T Dt(Ig0, Id0) + λg TV(Ig0) + λd TV(Id0) (5), [sent-162, score-0.487]

69 where Dt is the data-fit term for frame t and T is the sliding window of frames selected about the center frame. [sent-168, score-0.455]
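
A sketch of evaluating this objective; the data term shown (a lit-weighted quadratic fit to the image formation model over the warped frames) is an assumption consistent with equation 1, and TV is anisotropic finite-difference total variation:

    import numpy as np

    def tv(u):
        return (np.abs(np.diff(u, axis=0)).sum() +
                np.abs(np.diff(u, axis=1)).sum())

    def separation_loss(I_g0, I_d0, warped_frames, warped_patterns,
                        weights, lam_g, lam_d):
        """L(I_g0, I_d0) = sum_t D_t + lam_g*TV(I_g0) + lam_d*TV(I_d0)."""
        L = 0.0
        for I_t, s_t, a in zip(warped_frames, warped_patterns, weights):
            L += np.sum(a * (s_t * I_d0 + 0.5 * I_g0 - I_t) ** 2)
        return L + lam_g * tv(I_g0) + lam_d * tv(I_d0)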

70 For all experiments, the camera was radiometrically calibrated to have a linear response curve and the camera and projector were colocated using a plate beam splitter. [sent-179, score-0.535]

71 To correct for projector vignetting, all images were normalized with respect to a reference image of the same planar surface while fully lit by the projector. [sent-183, score-0.461]
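
The normalization step is a per-pixel division by the reference image; a sketch (the epsilon guard is an assumption):

    import numpy as np

    def normalize_vignetting(image, fully_lit_reference, eps=1e-6):
        """Divide out projector vignetting using a reference image of a
        planar surface under full projector illumination."""
        return image / (fully_lit_reference + eps)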

72 Comparisons on Rigidly Moving Scenes: The goal of these experiments is to compare the direct and global components generated by our algorithm on moving scenes to ground truth and to analyze the effect of temporal window size on separation accuracy. [sent-187, score-1.019]

73 Ground truth was acquired by first capturing 25 frames of a scene while projecting checkerboard patterns at different offsets. [sent-188, score-0.333]

74 The direct and global components calculated on these 25 frames are used as ground truth (RMS Error 0). [sent-190, score-0.537]
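
The error metric used against these ground-truth components is root-mean-square error; for completeness, a one-liner (assumed to be computed over all pixels):

    import numpy as np

    def rms_error(estimate, ground_truth):
        return float(np.sqrt(np.mean((estimate - ground_truth) ** 2)))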

75 We then captured a video sequence with the scene in motion while patterns were being projected. [sent-191, score-0.356]

76 The regularization improves performance when the number of frames is small and many pixels have not seen enough different projector pattern values. [sent-201, score-0.408]

77 For the video sequence corresponding to each trial, we tested our motion compensation method with different sliding window sizes using the first frame as the window center. [sent-203, score-1.484]

78 We evaluated the motion compensation with the warp refinement described in Section 3. [sent-206, score-0.584]

79 We also tested an interleaved approach where the projector alternates between patterns and uniform illumination (’Interleaved’ in Fig. [sent-210, score-0.705]

80 When the number of frames used is small, the regularized static method and the proposed motion compensated methods perform similarly. [sent-215, score-0.445]

81 As the number of frames increases, the improvement in the motion compensated output reduces and then stops. [sent-216, score-0.383]

82 When the window size is large, the frames near the edges of the sliding window cannot be aligned to the center frame because the viewpoint changes are too large and the global and direct components of the scene points change appreciably. [sent-217, score-1.237]

83 The motion compensation algorithm automatically discards these frames and they yield no improvement in the results. [sent-218, score-0.644]

84 The temporal window available for performing separation on dynamic scenes is small. [sent-220, score-0.586]

85 Fig. 5 shows results from our motion compensation algorithm and interleaving with different temporal window sizes in an example scene. [sent-225, score-0.967]

86 The blue ‘static’ curves are from direct-global separation on stationary scenes and represent the best possible performance a method could achieve for a given number of frames. [sent-233, score-0.383]

87 The red ‘moving’ curves are from using our motion compensation algorithm on moving scenes. [sent-234, score-0.575]

88 When the number of frames is small, the motion compensation method performs just as well on the moving sequences as normal separation on an equal number of static frames. [sent-235, score-1.052]

89 When the window size increases, frames far away from the window center are discarded because alignment fails and so performance of the motion compensated algorithm levels off. [sent-236, score-0.758]

90 Without motion compensation, the results are blurred and edges in the scene (around the fingers for example) are corrupted. [sent-240, score-0.323]

91 Discussion: Although we do not model the changes in global and direct components that occur within a small temporal window, our method is still able to handle broad specular lobes like shiny surfaces on wax and highlights on skin. [sent-244, score-0.607]

92 Sharp specularities and specular inter-reflections such as those from polished metal surfaces would cause both the image alignment and component separation steps to break down. [sent-245, score-0.637]

93 The fast direct-global separation algorithm for static scenes can handle sharp specularities but not specular inter-reflections. [sent-246, score-0.584]

94 Using shorter exposure times and smaller apertures to avoid motion blur and defocus means that less light reaches the camera and image noise becomes more of a problem. [sent-251, score-0.424]

95 We would need to consider how computational photography methods like coded aperture for motion deblurring [16] and light efficient photography [7] could be applied. [sent-252, score-0.447]

96 Multiplexed illumination for scene recovery in the presence of global illumination. [sent-277, score-0.363]

97 Our method makes more efficient use of images than interleaving because no frames are needed exclusively for tracking. [sent-301, score-0.394]

98 Separations that resolve a given level of detail can be obtained with a smaller temporal sliding window than an interleaving approach. [sent-302, score-0.586]

99 Fast separation of direct and global components of a scene using high frequency illumination. [sent-356, score-0.829]

100 The two columns on the right show the component estimates on the same frames using our motion compensation method. [sent-405, score-0.767]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('projector', 0.356), ('compensation', 0.339), ('separation', 0.274), ('interleaving', 0.253), ('direct', 0.218), ('relit', 0.178), ('motion', 0.164), ('light', 0.163), ('window', 0.145), ('frames', 0.141), ('illumination', 0.141), ('sliding', 0.122), ('wt', 0.121), ('lit', 0.105), ('brightness', 0.103), ('patterns', 0.099), ('global', 0.098), ('optical', 0.096), ('scene', 0.093), ('scattering', 0.093), ('illuminated', 0.092), ('component', 0.088), ('frame', 0.086), ('flow', 0.084), ('subsurface', 0.083), ('warp', 0.081), ('components', 0.08), ('compensated', 0.078), ('specular', 0.078), ('colocated', 0.075), ('igt', 0.075), ('separations', 0.075), ('rms', 0.075), ('moving', 0.072), ('patterned', 0.067), ('wax', 0.067), ('artifacts', 0.066), ('warps', 0.066), ('scenes', 0.066), ('temporal', 0.066), ('frequency', 0.066), ('static', 0.062), ('aligned', 0.062), ('idt', 0.058), ('uniform', 0.056), ('reflections', 0.055), ('inter', 0.053), ('interleaved', 0.053), ('align', 0.053), ('camera', 0.052), ('structured', 0.052), ('pattern', 0.052), ('specularities', 0.051), ('gtv', 0.05), ('illuminate', 0.05), ('materials', 0.048), ('aid', 0.048), ('scanning', 0.047), ('center', 0.047), ('formation', 0.046), ('defocus', 0.045), ('diffuse', 0.045), ('stationary', 0.043), ('deblurring', 0.042), ('dtv', 0.041), ('fabric', 0.041), ('illuminating', 0.041), ('appeared', 0.041), ('gt', 0.04), ('photography', 0.039), ('pixel', 0.038), ('alignment', 0.038), ('skin', 0.037), ('compensating', 0.037), ('decoded', 0.037), ('smooths', 0.037), ('taguchi', 0.037), ('tf', 0.037), ('estimates', 0.035), ('dynamic', 0.035), ('modulated', 0.034), ('blurred', 0.033), ('lighting', 0.033), ('motions', 0.033), ('fingers', 0.033), ('raskar', 0.033), ('acquisition', 0.033), ('shot', 0.032), ('linearization', 0.032), ('relaxes', 0.032), ('narasimhan', 0.032), ('source', 0.032), ('recovery', 0.031), ('remain', 0.031), ('resolution', 0.031), ('compensate', 0.03), ('radiance', 0.029), ('piecewise', 0.029), ('tv', 0.028), ('st', 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 82 iccv-2013-Compensating for Motion during Direct-Global Separation

Author: Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan

Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.

2 0.23679315 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length

Author: Nicolas Martin, Vincent Couture, Sébastien Roy

Abstract: We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting.

3 0.22365488 405 iccv-2013-Structured Light in Sunlight

Author: Mohit Gupta, Qi Yin, Shree K. Nayar

Abstract: Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than the projected light. In this paper, we propose the concept of light-concentration to overcome strong ambient illumination. Our key observation is that given a fixed light (power) budget, it is always better to allocate it sequentially in several portions of the scene, as compared to spreading it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders lower acquisition time than existing approaches. Our approach is illumination-adaptive as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.

4 0.16768508 317 iccv-2013-Piecewise Rigid Scene Flow

Author: Christoph Vogel, Konrad Schindler, Stefan Roth

Abstract: Estimating dense 3D scene flow from stereo sequences remains a challenging task, despite much progress in both classical disparity and 2D optical flow estimation. To overcome the limitations of existing techniques, we introduce a novel model that represents the dynamic 3D scene by a collection of planar, rigidly moving, local segments. Scene flow estimation then amounts to jointly estimating the pixelto-segment assignment, and the 3D position, normal vector, and rigid motion parameters of a plane for each segment. The proposed energy combines an occlusion-sensitive data term with appropriate shape, motion, and segmentation regularizers. Optimization proceeds in two stages: Starting from an initial superpixelization, we estimate the shape and motion parameters of all segments by assigning a proposal from a set of moving planes. Then the pixel-to-segment assignment is updated, while holding the shape and motion parameters of the moving planes fixed. We demonstrate the benefits of our model on different real-world image sets, including the challenging KITTI benchmark. We achieve leading performance levels, exceeding competing 3D scene flow methods, and even yielding better 2D motion estimates than all tested dedicated optical flow techniques.

5 0.16245642 12 iccv-2013-A General Dense Image Matching Framework Combining Direct and Feature-Based Costs

Author: Jim Braux-Zin, Romain Dupont, Adrien Bartoli

Abstract: Dense motion field estimation (typically optical flow, stereo disparity and surface registration) is a key computer vision problem. Many solutions have been proposed to compute small or large displacements, narrow or wide baseline stereo disparity, but a unified methodology is still lacking. We here introduce a general framework that robustly combines direct and feature-based matching. The feature-based cost is built around a novel robust distance function that handles keypoints and “weak” features such as segments. It allows us to use putative feature matches which may contain mismatches to guide dense motion estimation out of local minima. Our framework uses a robust direct data term (AD-Census). It is implemented with a powerful second order Total Generalized Variation regularization with external and self-occlusion reasoning. Our framework achieves state of the art performance in several cases (standard optical flow benchmarks, wide-baseline stereo and non-rigid surface registration). Our framework has a modular design that customizes to specific application needs.

6 0.14480186 78 iccv-2013-Coherent Motion Segmentation in Moving Camera Videos Using Optical Flow Orientations

7 0.1382083 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

8 0.13388938 160 iccv-2013-Fast Object Segmentation in Unconstrained Video

9 0.1329602 39 iccv-2013-Action Recognition with Improved Trajectories

10 0.12605864 300 iccv-2013-Optical Flow via Locally Adaptive Fusion of Complementary Data Costs

11 0.12309498 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces

12 0.1221474 105 iccv-2013-DeepFlow: Large Displacement Optical Flow with Deep Matching

13 0.12094256 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes

14 0.11250237 270 iccv-2013-Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking

15 0.10793687 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination

16 0.10702511 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

17 0.10409191 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal

18 0.10351512 414 iccv-2013-Temporally Consistent Superpixels

19 0.10277539 174 iccv-2013-Forward Motion Deblurring

20 0.10100853 129 iccv-2013-Dynamic Scene Deblurring


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.21), (1, -0.156), (2, -0.008), (3, 0.115), (4, -0.025), (5, 0.033), (6, 0.025), (7, -0.067), (8, 0.064), (9, 0.015), (10, -0.014), (11, -0.007), (12, 0.15), (13, -0.037), (14, -0.07), (15, -0.051), (16, -0.073), (17, 0.063), (18, 0.046), (19, 0.016), (20, 0.055), (21, -0.051), (22, 0.022), (23, -0.057), (24, -0.146), (25, 0.002), (26, 0.015), (27, 0.018), (28, 0.114), (29, -0.172), (30, 0.125), (31, 0.003), (32, 0.042), (33, 0.027), (34, 0.007), (35, 0.098), (36, -0.022), (37, -0.067), (38, -0.156), (39, 0.063), (40, 0.012), (41, 0.01), (42, 0.049), (43, 0.011), (44, -0.001), (45, -0.087), (46, 0.051), (47, 0.15), (48, 0.058), (49, -0.011)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96166867 82 iccv-2013-Compensating for Motion during Direct-Global Separation

Author: Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan

Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to beperformed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.

2 0.82434481 405 iccv-2013-Structured Light in Sunlight

Author: Mohit Gupta, Qi Yin, Shree K. Nayar

Abstract: Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than the projected light. In this paper, we propose the concept of light-concentration to overcome strong ambient illumination. Our key observation is that given a fixed light (power) budget, it is always better to allocate it sequentially in several portions of the scene, as compared to spreading it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders lower acquisition time than existing approaches. Our approach is illumination-adaptive as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.

3 0.80430186 207 iccv-2013-Illuminant Chromaticity from Image Sequences

Author: Veronique Prinet, Dani Lischinski, Michael Werman

Abstract: We estimate illuminant chromaticity from temporal sequences, for scenes illuminated by either one or two dominant illuminants. While there are many methods for illuminant estimation from a single image, few works so far have focused on videos, and even fewer on multiple light sources. Our aim is to leverage information provided by the temporal acquisition, where either the objects or the camera or the light source are/is in motion in order to estimate illuminant color without the need for user interaction or using strong assumptions and heuristics. We introduce a simple physically-based formulation based on the assumption that the incident light chromaticity is constant over a short space-time domain. We show that a deterministic approach is not sufficient for accurate and robust estimation: however, a probabilistic formulation makes it possible to implicitly integrate away hidden factors that have been ignored by the physical model. Experimental results are reported on a dataset of natural video sequences and on the GrayBall benchmark, indicating that we compare favorably with the state-of-the-art.

4 0.75787312 385 iccv-2013-Separating Reflective and Fluorescent Components Using High Frequency Illumination in the Spectral Domain

Author: Ying Fu, Antony Lam, Imari Sato, Takahiro Okabe, Yoichi Sato

Abstract: Hyperspectral imaging is beneficial to many applications but current methods do not consider fluorescent effects which are present in everyday items ranging from paper, to clothing, to even our food. Furthermore, everyday fluorescent items exhibit a mix of reflectance and fluorescence. So proper separation of these components is necessary for analyzing them. In this paper, we demonstrate efficient separation and recovery of reflective and fluorescent emission spectra through the use of high frequency illumination in the spectral domain. With the obtained fluorescent emission spectra from our high frequency illuminants, we then present to our knowledge, the first method for estimating the fluorescent absorption spectrum of a material given its emission spectrum. Conventional bispectral measurement of absorption and emission spectra needs to examine all combinations of incident and observed light wavelengths. In contrast, our method requires only two hyperspectral images. The effectiveness of our proposed methods are then evaluated through a combination of simulation and real experiments. We also demonstrate an application of our method to synthetic relighting of real scenes.

5 0.74118066 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length

Author: Nicolas Martin, Vincent Couture, Sébastien Roy

Abstract: We present a scanning method that recovers dense subpixel camera-projector correspondence without requiring any photometric calibration nor preliminary knowledge of their relative geometry. Subpixel accuracy is achieved by considering several zero-crossings defined by the difference between pairs of unstructured patterns. We use gray-level band-pass white noise patterns that increase robustness to indirect lighting and scene discontinuities. Simulated and experimental results show that our method recovers scene geometry with high subpixel precision, and that it can handle many challenges of active reconstruction systems. We compare our results to state of the art methods such as micro phase shifting and modulated phase shifting.

6 0.69376415 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes

7 0.68001831 164 iccv-2013-Fibonacci Exposure Bracketing for High Dynamic Range Imaging

8 0.65344614 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction

9 0.63228035 145 iccv-2013-Estimating the Material Properties of Fabric from Video

10 0.61183983 262 iccv-2013-Matching Dry to Wet Materials

11 0.61037922 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues

12 0.60019672 5 iccv-2013-A Color Constancy Model with Double-Opponency Mechanisms

13 0.59519202 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal

14 0.56929374 275 iccv-2013-Motion-Aware KNN Laplacian for Video Matting

15 0.56824362 301 iccv-2013-Optimal Orthogonal Basis and Image Assimilation: Motion Modeling

16 0.54141486 78 iccv-2013-Coherent Motion Segmentation in Moving Camera Videos Using Optical Flow Orientations

17 0.53521651 128 iccv-2013-Dynamic Probabilistic Volumetric Models

18 0.51832646 173 iccv-2013-Fluttering Pattern Generation Using Modified Legendre Sequence for Coded Exposure Imaging

19 0.51638728 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects

20 0.51453012 270 iccv-2013-Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.068), (7, 0.024), (13, 0.016), (26, 0.081), (31, 0.043), (40, 0.019), (42, 0.09), (48, 0.015), (64, 0.076), (73, 0.059), (74, 0.2), (89, 0.201), (98, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.84815359 82 iccv-2013-Compensating for Motion during Direct-Global Separation

Author: Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan

Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to beperformed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.

2 0.84511083 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

Author: Masakazu Iwamura, Tomokazu Sato, Koichi Kise

Abstract: Approximate nearest neighbor search (ANNS) is a basic and important technique used in many tasks such as object recognition. It involves two processes: selecting nearest neighbor candidates and performing a brute-force search of these candidates. Only the former though has scope for improvement. In most existing methods, it approximates the space by quantization. It then calculates all the distances between the query and all the quantized values (e.g., clusters or bit sequences), and selects a fixed number of candidates close to the query. The performance of the method is evaluated based on accuracy as a function of the number of candidates. This evaluation seems rational but poses a serious problem; it ignores the computational cost of the process of selection. In this paper, we propose a new ANNS method that takes into account costs in the selection process. Whereas existing methods employ computationally expensive techniques such as comparative sort and heap, the proposed method does not. This realizes a significantly more efficient search. We have succeeded in reducing computation times by one-third compared with the state-of-the- art on an experiment using 100 million SIFT features.

3 0.84339595 300 iccv-2013-Optical Flow via Locally Adaptive Fusion of Complementary Data Costs

Author: Tae Hyun Kim, Hee Seok Lee, Kyoung Mu Lee

Abstract: Many state-of-the-art optical flow estimation algorithms optimize the data and regularization terms to solve ill-posed problems. In this paper, in contrast to the conventional optical flow framework that uses a single or fixed data model, we study a novel framework that employs locally varying data term that adaptively combines different multiple types of data models. The locally adaptive data term greatly reduces the matching ambiguity due to the complementary nature of the multiple data models. The optimal number of complementary data models is learnt by minimizing the redundancy among them under the minimum description length constraint (MDL). From these chosen data models, a new optical flow estimation energy model is designed with the weighted sum of the multiple data models, and a convex optimization-based highly effective and practical solution thatfinds the opticalflow, as well as the weights isproposed. Comparative experimental results on the Middlebury optical flow benchmark show that the proposed method using the complementary data models outperforms the state-ofthe art methods.

4 0.82171595 426 iccv-2013-Training Deformable Part Models with Decorrelated Features

Author: Ross Girshick, Jitendra Malik

Abstract: In this paper, we show how to train a deformable part model (DPM) fast—typically in less than 20 minutes, or four times faster than the current fastest method—while maintaining high average precision on the PASCAL VOC datasets. At the core of our approach is “latent LDA,” a novel generalization of linear discriminant analysis for learning latent variable models. Unlike latent SVM, latent LDA uses efficient closed-form updates and does not require an expensive search for hard negative examples. Our approach also acts as a springboard for a detailed experimental study of DPM training. We isolate and quantify the impact of key training factors for the first time (e.g., How important are discriminative SVM filters? How important is joint parameter estimation? How many negative images are needed for training?). Our findings yield useful insights for researchers working with Markov random fields and partbased models, and have practical implications for speeding up tasks such as model selection.

5 0.82049704 239 iccv-2013-Learning Hash Codes with Listwise Supervision

Author: Jun Wang, Wei Liu, Andy X. Sun, Yu-Gang Jiang

Abstract: Hashing techniques have been intensively investigated in the design of highly efficient search engines for largescale computer vision applications. Compared with prior approximate nearest neighbor search approaches like treebased indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. How- ever, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.

6 0.8204025 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

7 0.7816242 89 iccv-2013-Constructing Adaptive Complex Cells for Robust Visual Tracking

8 0.78049397 420 iccv-2013-Topology-Constrained Layered Tracking with Latent Flow

9 0.78031492 425 iccv-2013-Tracking via Robust Multi-task Multi-view Joint Sparse Representation

10 0.78003335 379 iccv-2013-Semantic Segmentation without Annotating Segments

11 0.7798326 196 iccv-2013-Hierarchical Data-Driven Descent for Efficient Optimal Deformation Estimation

12 0.77974308 359 iccv-2013-Robust Object Tracking with Online Multi-lifespan Dictionary Learning

13 0.77971315 204 iccv-2013-Human Attribute Recognition by Rich Appearance Dictionary

14 0.77970076 160 iccv-2013-Fast Object Segmentation in Unconstrained Video

15 0.77928078 65 iccv-2013-Breaking the Chain: Liberation from the Temporal Markov Assumption for Tracking Human Poses

16 0.77837193 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning

17 0.77771372 57 iccv-2013-BOLD Features to Detect Texture-less Objects

18 0.77755576 396 iccv-2013-Space-Time Robust Representation for Action Recognition

19 0.77718639 338 iccv-2013-Randomized Ensemble Tracking

20 0.777098 340 iccv-2013-Real-Time Articulated Hand Pose Estimation Using Semi-supervised Transductive Regression Forests