cvpr cvpr2013 cvpr2013-37 knowledge-graph by maker-knowledge-mining

37 cvpr-2013-Adherent Raindrop Detection and Removal in Video


Source: pdf

Author: Shaodi You, Robby T. Tan, Rei Kawakami, Katsushi Ikeuchi

Abstract: Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. First, it detects raindrops based on the motion and the intensity temporal derivatives of the input video. Second, relying on the analysis that some areas of a raindrop completely occlude the scene, yet the remaining areas occlude it only partially, the method removes the two types of areas separately. For partially occluding areas, it restores them by retrieving as much information of the scene as possible, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity change. For completely occluding areas, it recovers them by using a video completion technique. Experimental results using various real videos show the effectiveness of the proposed method.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. [sent-7, score-0.466]

2 In this paper, a method that automatically detects and removes adherent raindrops is introduced. [sent-8, score-0.658]

3 First, it detects raindrops based on the motion and the intensity temporal derivatives of the input video. [sent-10, score-0.581]

4 Second, relying on the analysis that some areas of a raindrop completely occlude the scene, yet the remaining areas occlude it only partially, the method removes the two types of areas separately. [sent-11, score-1.068]

5 For partially occluding areas, it restores them by retrieving as much information of the scene as possible, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity change. [sent-12, score-0.324]

6 For completely occluding areas, it recovers them by using a video completion technique. [sent-13, score-0.149]

7 On a rainy day, raindrops inevitably adhere to windscreens, camera lenses, or protective shields. [sent-18, score-0.550]

8 These adherent raindrops occlude and deform some image areas, degrading the performance of many algorithms in vision systems (such as feature detection, tracking, and stereo correspondence). [sent-19, score-0.658]

9 Identifying adherent raindrops from images can be problematic for a few reasons, as shown in Fig. [sent-22, score-0.658]

10 Figure 1. (a) Various shapes. (b) Transparency. (c) Blurring. (d) Glare. (e) Raindrop detection. (f) Raindrop removal. [sent-28, score-0.13]

11 (e-f) The detection and removal result by our method. [sent-30, score-0.153]

12 To address the problems, we analyze the appearance of adherent raindrops from their local spatio-temporal derivatives. [sent-34, score-0.658]

13 First, a clear, unblurred adherent raindrop works like a fish-eye lens and significantly contracts the image of a scene. [sent-35, score-0.999]

14 Consequently, the motion inside raindrops is distinctly slower than the motion in non-raindrop areas. [sent-36, score-0.498]

15 Second, unlike clear raindrops, blurred raindrops are mixtures of rays originating from points in the entire scene. [sent-37, score-0.557]

16 Because of this, the intensity temporal derivative of blurred raindrops is significantly smaller than that of non-raindrops. [sent-38, score-0.65]

17 By further analyzing the image formation of raindrops, we found that some areas of a raindrop completely occlude the scene behind, while the rest occlude it only partially. [sent-43, score-0.910]

18 For partially occluding areas, we restore them by retrieving as much information of the scene as possible, namely, by solving a blending function on the detected areas using the intensity change over time. [sent-44, score-0.285]

19 For completely occluding areas, we recover them by using a video completion technique. [sent-45, score-0.149]

20 The contributions of the paper are threefold: (1) Adherent raindrops are theoretically modeled and analyzed using the derivative properties with only a few parameters, enabling the method to be applied to general video cameras, e.g. [sent-49, score-0.525]

21 (3) A relatively fast adherent raindrop removal method is proposed. [sent-53, score-1.129]

22 It utilizes not only a video completion technique, but also the information behind some blurred areas of raindrops. [sent-54, score-0.206]

23 Sec. 2 discusses the related work on raindrop detection and removal. [sent-57, score-0.806]

24 Sec. 3 explains the modeling of the spatial and temporal derivative properties of raindrop images. [sent-59, score-0.865]

25 The detailed methodology of the raindrop detection is described in Sec. [sent-60, score-0.806]

26 4, followed by the detailed methodology of the raindrop removal in Sec. [sent-61, score-0.913]

27 [10] introduce a detection and removal method using a single image. [sent-77, score-0.153]

28 Unfortunately, applying these methods to handle adherent raindrops is hardly possible, since the physics and appearance of falling raindrops are significantly different from those of adherent raindrops. [sent-78, score-1.316]

29 Methods for detecting adherent raindrops caused by sparse rain have been proposed. [sent-79, score-0.713]

30 Some attempt to model the shape of adherent raindrops by a sphere crown [14] and, later, Bezier curves [15]. [sent-81, score-0.676]

31 However, the models are insufficient, since a sphere crown and Bezier curves can cover only a small portion of possible raindrop shapes (Fig. [sent-82, score-0.801]

32 The method in [11] directly collects image templates of many raindrops and calculates their principal components. [sent-86, score-0.469]

33 Others propose a detection and removal method for videos taken by stereo [23] and pan-tilt [22] cameras. [sent-91, score-0.168]

34 The authors of [6] propose a method to remove glare caused by adherent raindrops by using a specifically designed optical shutter. [sent-94, score-0.766]

35 As for raindrop removal, Roser and Geiger [14] address it using image registration, and Yamashita et al. [sent-95, score-0.796]

36 (a) A raindrop is a contracted image of the environment. [sent-100, score-0.806]

37 (b) On the image plane, there is a smooth mapping ϕ from the raindrop into the environment. [sent-101, score-0.783]

38 (c) The contraction ratios from the environment to a raindrop are significant. [sent-102, score-0.878]

39 [9] exploit video completion by separating static background and moving foreground, and later [8] exploit video completion under cyclic motion. [sent-117, score-0.204]

40 Therefore, we consider using cues from our adherent raindrop modeling to help the removal. [sent-123, score-0.999]

41 Raindrop modeling: unlike the previous methods [15, 11, 23, 22, 6], which try to model each raindrop as a unit object, we model raindrops locally from the derivative properties that have only a few parameters. [sent-125, score-1.275]

42 As shown in (a), the appearance of each raindrop is a contracted image of the environment, as if it were taken by a fish-eye-lens camera. [sent-130, score-0.806]

43 The values in (c) are the contraction ratios between the original image and the image inside the raindrops, calculated from the black-and-white patterns. [sent-133, score-0.502]

44 The contraction ratio is around 20 to 30, meaning that the motion observed inside the raindrops will be 1/30 to 1/20 of that in the other areas of the image. [sent-134, score-0.579]
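
As an illustrative sketch of this motion cue (not the paper's implementation), pixels whose flow magnitude is a small fraction of the typical scene motion can be flagged as raindrop candidates; Farneback dense flow here substitutes for the SIFT flow used in the paper, and the threshold is an assumption drawn from the 1/30 to 1/20 observation.

```python
import cv2
import numpy as np

def slow_motion_mask(prev_gray, curr_gray, ratio=1.0 / 20.0):
    """Flag pixels moving far slower than the typical scene motion, a
    candidate cue for adherent raindrops. Farneback dense flow is a
    stand-in for the SIFT flow used in the paper; `ratio` is an
    illustrative threshold based on the 1/30 to 1/20 observation."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)           # per-pixel motion magnitude
    typical = np.median(mag[mag > 1e-3]) + 1e-6  # robust scene-motion estimate
    return mag < ratio * typical                 # True on slow, raindrop-like pixels
```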

45 Let us denote a point inside a raindrop on the image plane. [sent-135, score-0.783]

46 Green light: the light coming from an environment point; blue light: the light refracted by a raindrop. [sent-169, score-0.211]

47 (b) The raindrop plane cuts the section of the model in (a) when the raindrop is big. [sent-170, score-1.588]

48 (b’) The raindrop plane cuts the section when the raindrop is small. [sent-174, score-0.805]

49 Temporal derivative of blurred raindrops: when a camera is focused on the environment scene, raindrops will be blurred. [sent-195, score-0.532]

50 The image intensity of blurred pixels is a mixture of rays, including the one originating from the point that coincides with the line of sight (the green line in Fig. [sent-196, score-0.225]

51 Let us model the image intensity of blurred pixels using a blending function. [sent-201, score-0.177]

52 We denote the light intensity collected by pixel (x, y) as I(x, y), the light intensity formed by an environment point that intersects with the line of sight without passing through a raindrop as Ie(x, y), and the light intensity reaching (x, y) through a raindrop as Ir(x, y). [sent-202, score-2.055]

53 Then, pixel (x, y) collecting light from both the raindrop and the environment can be described as: I(x, y) = (1 − α) Ie(x, y) + α Ir(x, y), (4) where α denotes the proportion of the light path covered by a raindrop, as depicted in Figs. [sent-203, score-1.007]
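
A minimal numerical restatement of Eq. (4) and its inversion, which the removal step relies on later; the function names are ours, and estimating α and Ir (the actual work of the detection and removal steps) is deliberately left out.

```python
import numpy as np

def blend(I_e, I_r, alpha):
    # Eq. (4): the observed intensity mixes environment and raindrop light.
    return (1.0 - alpha) * I_e + alpha * I_r

def invert_blend(I, I_r, alpha, eps=1e-6):
    # Solve Eq. (4) for the environment term Ie; meaningful only where
    # alpha < 1 (partially occluding pixels). I_r and alpha must be
    # estimated elsewhere.
    return (I - alpha * I_r) / np.maximum(1.0 - alpha, eps)
```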

54 In case A, light rays emitted from an environment point are all collected at (x, y), thus I(x, y) = Ie(x, y). [sent-209, score-0.172]

55 In consecutive frames, we observed that the intensity of blurred pixels (cases B and C) does not change as distinctly as that of environment pixels (case A). [sent-215, score-0.223]

56 To analyze this property more carefully, let us look into the intensity temporal derivatives of blurred pixels. [sent-216, score-0.175]

57 In cases B and C, the light collected from the raindrop is actually refracted from a large area of the environment. [sent-219, score-0.887]

58 Ir(x, y) = ∫∫_Ωr(x,y) W(z, w) Ie(z, w) dz dw, where W(z, w) is the weight coefficient determined by the raindrop geometry. [sent-225, score-0.783]

59 Consider Eq. (7) in the frequency domain, where the temporal high-frequency component on a raindrop is significantly smaller than that of the environment, described as: |Ir(x, y, t1) − Ir(x, y, t2)| ≪ |Ie(x, y, t1) − Ie(x, y, t2)|. (8) [sent-241, score-0.859]
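
A hedged sketch of this property as a per-pixel feature: the temporal high-frequency energy over a short frame stack, which Eq. (8) predicts to be much smaller on blurred raindrop pixels; the cutoff value is an assumption of the sketch.

```python
import numpy as np

def temporal_highfreq_energy(stack, cutoff=0.25):
    """stack: (T, H, W) grayscale frames. Per-pixel energy of temporal
    frequencies above `cutoff` (cycles/frame); by Eq. (8) this should be
    much smaller on blurred raindrop pixels than on environment pixels.
    The cutoff is an illustrative assumption."""
    F = np.fft.rfft(stack.astype(np.float32), axis=0)
    freqs = np.fft.rfftfreq(stack.shape[0])  # normalized temporal frequencies
    return np.sum(np.abs(F[freqs > cutoff]) ** 2, axis=0)
```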

60 As shown in (d), a raindrop will refract bright lights from the environment and generate glare. [sent-248, score-0.783]

61 The reasons are: first, glare is the effect caused by a light source emitting high-intensity light, and the spatial derivative introduced in Sec. [sent-250, score-0.282]

62 The intensity monotonically increases until it saturates, and then monotonically decreases until the glare fades out. [sent-255, score-0.160]

63 Raindrop detection, feature extraction: based on the analysis of motion and the intensity temporal derivative, we generate features for detection. Figure 5: (a) image; (b) inter-frame SIFT flow; (c) summation of (b) over 100 frames. [sent-258, score-0.170]

64 Figure 7: (a) feature; (b) level sets; (c) detected raindrops. [sent-262, score-0.442]

65 Second, we calculate the intensity temporal change |I(x, y, t1) − I(x, y, t2)|, as shown in Fig. [sent-270, score-0.148]
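
A minimal sketch of this accumulated feature, assuming the roughly 100-frame summation shown in Fig. 5(c); the frame count and lack of normalization are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def accumulated_temporal_change(frames, n=100):
    """Sum of inter-frame absolute intensity changes over up to n grayscale
    frames, mirroring the accumulation in Fig. 5(c); persistently low
    values mark candidate raindrop pixels."""
    stack = np.stack(frames[:n]).astype(np.float32)
    return np.abs(np.diff(stack, axis=0)).sum(axis=0)
```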

66 However, raindrop positions can shift over a certain period of time. [sent-276, score-0.783]

67 In our observation, with moderate wind, raindrops can be considered static over a few minutes. [sent-277, score-0.442]

68 The following criteria are applied further for determining raindrop areas: 1. [sent-288, score-0.783]

69 Since raindrop areas should have smaller feature values, the threshold is set to −0. [sent-291, score-0.841]

70 The level set around a raindrop area must be closed. [sent-295, score-0.799]

71 for all level sets
    if (feature < th1 & circumference < th2 & roundness > th3 & closed)
        the area circled by the level set is a raindrop
    end
end
ii = ii + 10 [sent-300, score-0.967]

72 3. The diameter of a single raindrop area is normally smaller than 5 millimeters. [sent-302, score-0.820]

73 Thus, for our camera setting, the threshold for the raindrop circumference is loosely set to less than 200 pixels. [sent-303, score-0.845]

74 A rounder shape has a bigger roundness value, and a perfect circle has the maximum roundness value of 1/(2π) ≈ 0.159. [sent-306, score-0.145]

75 The threshold for a raindrop is set to be greater than 0. [sent-310, score-0.783]

76 Our overall raindrop detection algorithm is described in Algorithm 1. [sent-315, score-0.806]
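
Since Algorithm 1 is only fragmentarily reproduced in this summary, here is a hedged sketch of the listed criteria using OpenCV contours; `feat_th` stands in for the truncated threshold above, and the roundness formula used is the standard isoperimetric ratio 4πA/P² (maximum 1.0 for a circle), not necessarily the paper's definition.

```python
import cv2
import numpy as np

def detect_raindrops(feature, feat_th=-0.1, max_perimeter=200.0,
                     min_roundness=0.7):
    """Accept small, round, closed regions of low feature value as raindrops.
    findContours returns closed region boundaries, covering the 'closed'
    criterion; all thresholds are illustrative, and the roundness here is
    4*pi*A/P^2 (1.0 for a perfect circle), a stand-in for the paper's
    measure."""
    mask = (feature < feat_th).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    drops = []
    for c in contours:
        perim = cv2.arcLength(c, True)
        if perim < 1e-3 or perim > max_perimeter:
            continue
        roundness = 4.0 * np.pi * cv2.contourArea(c) / (perim * perim)
        if roundness > min_roundness:
            drops.append(c)
    return drops
```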

77 Raindrop removal: although the existing methods try to restore the entire areas detected as raindrops by considering them as solid occluders [14, 22], it is more faithful to restore the raindrop areas from the source scene whenever possible. [sent-318, score-1.397]

78 From Eq. (4), we know that some areas of a raindrop completely occlude the scene behind, while the rest occlude it only partially. [sent-320, score-0.910]

79 For partially occluding areas, we restore them by retrieving as much information of the scene as possible, and for completely occluding areas, we recover them by using a video completion technique. [sent-321, score-0.221]

80 Note that, based on the detection phase, the position and shape of the raindrops are known. Algorithm 2 (raindrop removal) defaults: N = 100, ωth = 0. [sent-328, score-0.283]

81 From Eq. (8), the high-frequency component of the raindrop term Ir is negligible. [sent-352, score-0.805]
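
A hedged sketch of how this enables restoration (not the paper's exact solver): with the temporal change of Ir negligible, the frame difference satisfies dI ≈ (1 − α) dIe, so α can be estimated by comparing the change inside a raindrop against a scene-change proxy from non-raindrop pixels, after which Eq. (4) is inverted. The median proxy, the two-frame Ir estimate, and the clamps are all assumptions of this sketch.

```python
import numpy as np

def restore_partial(I1, I2, raindrop_mask, eps=1e-6):
    """Restore partially occluding pixels from two consecutive grayscale
    frames. Since Eq. (8) makes the change of Ir negligible, the frame
    difference satisfies dI ~= (1 - alpha) * dIe. Here dIe is approximated
    by the median change of non-raindrop pixels and Ir by the two-frame
    average -- both crude stand-ins for the paper's clues."""
    I1 = I1.astype(np.float32)
    I2 = I2.astype(np.float32)
    dI = np.abs(I2 - I1)
    dIe = np.median(dI[~raindrop_mask]) + eps    # scene-change proxy
    alpha = np.clip(1.0 - dI / dIe, 0.0, 0.95)   # per-pixel occlusion estimate
    Ir_hat = 0.5 * (I1 + I2)                     # rough raindrop-light estimate
    Ie = (I2 - alpha * Ir_hat) / (1.0 - alpha)   # invert Eq. (4)
    out = I2.copy()
    out[raindrop_mask] = np.clip(Ie[raindrop_mask], 0.0, 255.0)
    return out
```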

82 Video completion: after restoring the partially occluding raindrop pixels, two remaining problems need to be completed: (1) when α is close or equal to 1. [sent-391, score-0.838]

83 (2) When there is glare, the light component from the raindrop will be too strong and therefore saturated. [sent-396, score-0.855]
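
As a minimal single-frame stand-in for the video completion applied to these two cases (α close to 1 and glare saturation), OpenCV inpainting can fill the masked pixels; a faithful implementation would instead use the temporally consistent completion of [9, 8].

```python
import cv2
import numpy as np

def complete_frame(frame_bgr, occluded_mask):
    """Fill completely occluded or glare-saturated pixels by single-frame
    inpainting, a stand-in for the temporally consistent video completion
    of [9, 8] used in the paper."""
    mask = (occluded_mask > 0).astype(np.uint8) * 255
    return cv2.inpaint(frame_bgr, mask, 5, cv2.INPAINT_TELEA)
```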

84 Our overall raindrop removal procedure is shown in Algorithm 2. [sent-400, score-0.913]

85 Experiments and Applications We conducted quantitative experiments to measure the accuracy and general applicability of our proposed detection and removal method. [sent-402, score-0.153]

86 Unlike the existing methods that only detect the center and size of raindrops, our proposed method can detect raindrops with a large variety of shapes. [sent-432, score-0.442]

87 In Fig. 11, the synthesized raindrops were generated on a video, which then became the input. [sent-440, score-0.442]

88 Figure 9 (comparison across Experiment 1, Experiment 2, Experiment 3, and Experiment 4; car-mounted: N/A; surveillance: N/A). [sent-452, score-0.272]

89 Raindrop removal: we show a few results of removing raindrops in videos taken by a hand-held camera and a vehicle-mounted camera, as shown in the first and second rows of Fig. [sent-458, score-0.519]

90 To demonstrate the performance of our raindrop removal method, the manually labeled raindrops were also included. [sent-460, score-1.355]

91 Overall evaluation: the overall automatic raindrop detection and removal results in videos taken by a hand-held camera and a car-mounted camera are shown in the third row of Fig. [sent-461, score-1.052]

92 Conclusion We have introduced a novel method to detect and remove adherent raindrops in video. [sent-464, score-0.675]

93 The key idea of detecting raindrops is based on our theoretical findings that the motion of raindrop pixels is slower than that of non-raindrop pixels, and the temporal change of intensity of raindrop pixels is smaller than that of non-raindrop pixels. [sent-465, score-2.19]

94 The key idea on raindrop removal is to solve the blending function with the clues from detection and intensity change in a few consecutive frames, as well as to employ a video completion technique only for those that cannot be restored. [sent-466, score-1.168]

95 To our knowledge, our automatic raindrop detection and removal method is novel and can benefit many applications that suffer from adherent raindrops. [sent-467, score-1.152]

96 Figure panels: Experiment 1, Experiment 2, Experiment 3. [sent-478, score-0.136]

97 Third row: the removal result with the raindrops automatically detected. [sent-485, score-0.572]

98 Realistic modeling of water droplets for monocular adherent raindrop recognition using Bezier curves. [sent-569, score-1.045]

99 Noises removal from image sequences acquired with moving camera by estimating camera motion from spatio-temporal information. [sent-607, score-0.229]

100 Removal of adherent waterdrops from images acquired with stereo camera. [sent-613, score-0.216]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('raindrop', 0.783), ('raindrops', 0.442), ('adherent', 0.216), ('removal', 0.13), ('glare', 0.091), ('wexler', 0.088), ('ie', 0.074), ('light', 0.072), ('intensity', 0.069), ('xperiment', 0.068), ('roser', 0.065), ('roundness', 0.06), ('areas', 0.058), ('completion', 0.058), ('blurred', 0.057), ('kurihata', 0.057), ('rain', 0.055), ('ir', 0.052), ('environment', 0.051), ('derivative', 0.05), ('occludes', 0.047), ('contraction', 0.044), ('yamashita', 0.042), ('occluding', 0.041), ('camera', 0.039), ('blending', 0.038), ('garg', 0.035), ('rays', 0.034), ('video', 0.033), ('temporal', 0.032), ('bezier', 0.03), ('haze', 0.03), ('restore', 0.028), ('calculate', 0.027), ('circle', 0.025), ('originated', 0.024), ('weather', 0.024), ('accumulated', 0.024), ('outdoor', 0.024), ('detection', 0.023), ('repairing', 0.023), ('held', 0.023), ('adhered', 0.023), ('barnum', 0.023), ('circumference', 0.023), ('contracted', 0.023), ('protecting', 0.023), ('rainy', 0.023), ('plane', 0.022), ('frequency', 0.022), ('cyclic', 0.022), ('frames', 0.022), ('motion', 0.021), ('diameter', 0.021), ('unprocessed', 0.02), ('hara', 0.02), ('shiratori', 0.02), ('change', 0.02), ('experiment', 0.019), ('reappears', 0.019), ('kang', 0.018), ('crown', 0.018), ('remove', 0.017), ('retrieving', 0.017), ('completely', 0.017), ('ero', 0.017), ('derivatives', 0.017), ('collecting', 0.016), ('snow', 0.016), ('water', 0.016), ('area', 0.016), ('calculated', 0.016), ('refracted', 0.016), ('videos', 0.015), ('included', 0.015), ('end', 0.015), ('visibility', 0.015), ('sight', 0.015), ('emitted', 0.015), ('inter', 0.015), ('slower', 0.014), ('partially', 0.014), ('clues', 0.014), ('uu', 0.014), ('iros', 0.014), ('pixels', 0.013), ('default', 0.013), ('et', 0.013), ('blur', 0.013), ('coincides', 0.013), ('frame', 0.013), ('dx', 0.013), ('flow', 0.013), ('inpainting', 0.013), ('path', 0.013), ('jia', 0.012), ('navigation', 0.012), ('transparent', 0.012), ('situations', 0.012), ('missing', 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

Author: Shaodi You, Robby T. Tan, Rei Kawakami, Katsushi Ikeuchi

Abstract: Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. First, it detects raindrops based on the motion and the intensity temporal derivatives of the input video. Second, relying on the analysis that some areas of a raindrop completely occlude the scene, yet the remaining areas occlude it only partially, the method removes the two types of areas separately. For partially occluding areas, it restores them by retrieving as much information of the scene as possible, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity change. For completely occluding areas, it recovers them by using a video completion technique. Experimental results using various real videos show the effectiveness of the proposed method.

2 0.047229372 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

3 0.043290455 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

Author: Bastian Goldluecke, Sven Wanner

Abstract: Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space.

4 0.041798398 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

Author: Hee Seok Lee, Kyoung Mu Lee

Abstract: Motion blur frequently occurs in dense 3D reconstruction using a single moving camera, and it degrades the quality of the 3D reconstruction. To handle motion blur caused by rapid camera shakes, we propose a blur-aware depth reconstruction method, which utilizes a pixel correspondence that is obtained by considering the effect of motion blur. Motion blur is dependent on 3D geometry, thus parameterizing blurred appearance of images with scene depth given camera motion is possible and a depth map can be accurately estimated from the blur-considered pixel correspondence. The estimated depth is then converted into pixel-wise blur kernels, and non-uniform motion blur is easily removed with low computational cost. The obtained blur kernel is depth-dependent, thus it effectively addresses scene-depth variation, which is a challenging problem in conventional non-uniform deblurring methods.

5 0.040132597 158 cvpr-2013-Exploring Weak Stabilization for Motion Feature Extraction

Author: Dennis Park, C. Lawrence Zitnick, Deva Ramanan, Piotr Dollár

Abstract: We describe novel but simple motion features for the problem of detecting objects in video sequences. Previous approaches either compute optical flow or temporal differences on video frame pairs with various assumptions about stabilization. We describe a combined approach that uses coarse-scale flow and fine-scale temporal difference features. Our approach performs weak motion stabilization by factoring out camera motion and coarse object motion while preserving nonrigid motions that serve as useful cues for recognition. We show results for pedestrian detection and human pose estimation in video sequences, achieving state-of-the-art results in both. In particular, given a fixed detection rate our method achieves a five-fold reduction in false positives over prior art on the Caltech Pedestrian benchmark. Finally, we perform extensive diagnostic experiments to reveal what aspects of our system are crucial for good performance. Proper stabilization, long time-scale features, and proper normalization are all critical.

6 0.038547717 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

7 0.037957277 22 cvpr-2013-A Non-parametric Framework for Document Bleed-through Removal

8 0.037748173 181 cvpr-2013-Fusing Depth from Defocus and Stereo with Coded Apertures

9 0.037590701 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

10 0.035439737 357 cvpr-2013-Revisiting Depth Layers from Occlusions

11 0.034794088 330 cvpr-2013-Photometric Ambient Occlusion

12 0.033770483 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

13 0.033604607 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

14 0.033172976 307 cvpr-2013-Non-uniform Motion Deblurring for Bilayer Scenes

15 0.033102341 187 cvpr-2013-Geometric Context from Videos

16 0.033078916 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera

17 0.033003528 195 cvpr-2013-HDR Deghosting: How to Deal with Saturation?

18 0.032627407 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

19 0.032198079 332 cvpr-2013-Pixel-Level Hand Detection in Ego-centric Videos

20 0.031287521 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.071), (1, 0.058), (2, 0.005), (3, 0.002), (4, -0.03), (5, -0.005), (6, -0.011), (7, 0.002), (8, 0.002), (9, 0.028), (10, 0.005), (11, -0.003), (12, 0.043), (13, -0.016), (14, -0.014), (15, 0.015), (16, 0.017), (17, 0.004), (18, -0.015), (19, -0.001), (20, -0.012), (21, 0.034), (22, -0.019), (23, -0.003), (24, 0.001), (25, -0.03), (26, 0.01), (27, -0.022), (28, -0.005), (29, 0.041), (30, -0.004), (31, -0.026), (32, 0.012), (33, -0.018), (34, 0.034), (35, 0.013), (36, -0.002), (37, -0.002), (38, 0.0), (39, 0.001), (40, -0.041), (41, 0.013), (42, -0.053), (43, 0.029), (44, -0.014), (45, -0.017), (46, -0.065), (47, -0.04), (48, -0.009), (49, -0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.83703917 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

Author: Shaodi You, Robby T. Tan, Rei Kawakami, Katsushi Ikeuchi

Abstract: Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatiotemporal derivatives ofraindrops. First, it detects raindrops based on the motion and the intensity temporal derivatives of the input video. Second, relying on an analysis that some areas of a raindrop completely occludes the scene, yet the remaining areas occludes only partially, the method removes the two types of areas separately. For partially occluding areas, it restores them by retrieving as much as possible information of the scene, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity change. For completely occluding areas, it recovers them by using a video completion technique. Experimental results using various real videos show the effectiveness of the proposed method.

2 0.58069807 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

Author: Yu Ji, Jinwei Ye, Jingyi Yu

Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LF-probe). An LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat’s Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.

3 0.55858797 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

Author: Visesh Chari, Peter Sturm

Abstract: 3D reconstruction of transparent refractive objects like a plastic bottle is challenging: they lack appearance related visual cues and merely reflect and refract light from the surrounding environment. Amongst several approaches to reconstruct such objects, the seminal work of Light-Path triangulation [17] is highly popular because of its general applicability and analysis of minimal scenarios. A lightpath is defined as the piece-wise linear path taken by a ray of light as it passes from source, through the object and into the camera. Transparent refractive objects not only affect the geometric configuration of light-paths but also their radiometric properties. In this paper, we describe a method that combines both geometric and radiometric information to do reconstruction. We show two major consequences of the addition of radiometric cues to the light-path setup. Firstly, we extend the case of scenarios in which reconstruction is plausible while reducing the minimal requirements for a unique reconstruction. This happens as a consequence of the fact that radiometric cues add an additional known variable to the already existing system of equations. Secondly, we present a simple algorithm for reconstruction, owing to the nature of the radiometric cue. We present several synthetic experiments to validate our theories, and show high quality reconstructions in challenging scenarios.

4 0.55858356 176 cvpr-2013-Five Shades of Grey for Fast and Reliable Camera Pose Estimation

Author: Adam Herout, István Szentandrási, Michal Zachariáš, Markéta Dubská, Rudolf Kajan

Abstract: We introduce here an improved design of the Uniform Marker Fields and an algorithm for their fast and reliable detection. Our concept of the marker field is designed so that it can be detected and recognized for camera pose estimation: in various lighting conditions, under a severe perspective, while heavily occluded, and under a strong motion blur. Our marker field detection harnesses the fact that the edges within the marker field meet at two vanishing points and that the projected planar grid of squares can be defined by a detectable mathematical formalism. The modules of the grid are greyscale and the locations within the marker field are defined by the edges between the modules. The assumption that the marker field is planar allows for a very cheap and reliable camera pose estimation in the captured scene. The detection rates and accuracy are slightly better compared to state-of-the-art marker-based solutions. At the same time, and more importantly, our detector of the marker field is several times faster and the reliable real-time detection can be thus achieved on mobile and low-power devices. We show three targeted applications where the planarity is assured and where the presented marker field design and detection algorithm provide a reliable and extremely fast solution.

5 0.55073422 269 cvpr-2013-Light Field Distortion Feature for Transparent Object Recognition

Author: Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-Ichiro Taniguchi

Abstract: Current object-recognition algorithms use local features, such as scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches though cannot apply to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light-field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

6 0.53667408 409 cvpr-2013-Spectral Modeling and Relighting of Reflective-Fluorescent Scenes

7 0.53416103 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas

8 0.52330905 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

9 0.52169913 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

10 0.52042764 413 cvpr-2013-Story-Driven Summarization for Egocentric Video

11 0.51772869 333 cvpr-2013-Plane-Based Content Preserving Warps for Video Stabilization

12 0.5071215 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

13 0.49762076 330 cvpr-2013-Photometric Ambient Occlusion

14 0.48952723 118 cvpr-2013-Detecting Pulse from Head Motions in Video

15 0.47787765 357 cvpr-2013-Revisiting Depth Layers from Occlusions

16 0.47430718 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

17 0.47328225 454 cvpr-2013-Video Enhancement of People Wearing Polarized Glasses: Darkening Reversal and Reflection Reduction

18 0.47060731 114 cvpr-2013-Depth Acquisition from Density Modulated Binary Patterns

19 0.46914563 313 cvpr-2013-Online Dominant and Anomalous Behavior Detection in Videos

20 0.46816626 90 cvpr-2013-Computing Diffeomorphic Paths for Large Motion Interpolation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.098), (16, 0.062), (24, 0.302), (26, 0.034), (28, 0.013), (33, 0.211), (67, 0.045), (69, 0.047), (87, 0.053)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.76196903 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

Author: Shaodi You, Robby T. Tan, Rei Kawakami, Katsushi Ikeuchi

Abstract: Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. First, it detects raindrops based on the motion and the intensity temporal derivatives of the input video. Second, relying on the analysis that some areas of a raindrop completely occlude the scene, yet the remaining areas occlude it only partially, the method removes the two types of areas separately. For partially occluding areas, it restores them by retrieving as much information of the scene as possible, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity change. For completely occluding areas, it recovers them by using a video completion technique. Experimental results using various real videos show the effectiveness of the proposed method.

2 0.68352282 384 cvpr-2013-Segment-Tree Based Cost Aggregation for Stereo Matching

Author: Xing Mei, Xun Sun, Weiming Dong, Haitao Wang, Xiaopeng Zhang

Abstract: This paper presents a novel tree-based cost aggregation method for dense stereo matching. Instead of employing the minimum spanning tree (MST) and its variants, a new tree structure, ”Segment-Tree ”, is proposed for non-local matching cost aggregation. Conceptually, the segment-tree is constructed in a three-step process: first, the pixels are grouped into a set of segments with the reference color or intensity image; second, a tree graph is created for each segment; and in the final step, these independent segment graphs are linked to form the segment-tree structure. In practice, this tree can be efficiently built in time nearly linear to the number of the image pixels. Compared to MST where the graph connectivity is determined with local edge weights, our method introduces some ’non-local’ decision rules: the pixels in one perceptually consistent segment are more likely to share similar disparities, and therefore their connectivity within the segment should be first enforced in the tree construction process. The matching costs are then aggregated over the tree within two passes. Performance evaluation on 19 Middlebury data sets shows that the proposed method is comparable to previous state-of-the-art aggregation methods in disparity accuracy and processing speed. Furthermore, the tree structure can be refined with the estimated disparities, which leads to consistent scene segmentation and significantly better aggregation results.

3 0.67168009 173 cvpr-2013-Finding Things: Image Parsing with Regions and Per-Exemplar Detectors

Author: Joseph Tighe, Svetlana Lazebnik

Abstract: This paper presents a system for image parsing, or labeling each pixel in an image with its semantic category, aimed at achieving broad coverage across hundreds of object categories, many of them sparsely sampled. The system combines region-level features with per-exemplar sliding window detectors. Per-exemplar detectors are better suited for our parsing task than traditional bounding box detectors: they perform well on classes with little training data and high intra-class variation, and they allow object masks to be transferred into the test image for pixel-level segmentation. The proposed system achieves state-of-the-art accuracy on three challenging datasets, the largest of which contains 45,676 images and 232 labels.

4 0.66726321 333 cvpr-2013-Plane-Based Content Preserving Warps for Video Stabilization

Author: Zihan Zhou, Hailin Jin, Yi Ma

Abstract: Recently, a new image deformation technique called content-preserving warping (CPW) has been successfully employed to produce the state-of-the-art video stabilization results in many challenging cases. The key insight of CPW is that the true image deformation due to viewpoint change can be well approximated by a carefully constructed warp using a set of sparsely constructed 3D points only. However, since CPW solely relies on the tracked feature points to guide the warping, it works poorly in large textureless regions, such as ground and building interiors. To overcome this limitation, in this paper we present a hybrid approach for novel view synthesis, observing that the textureless regions often correspond to large planar surfaces in the scene. Particularly, given a jittery video, we first segment each frame into piecewise planar regions as well as regions labeled as non-planar using Markov random fields. Then, a new warp is computed by estimating a single homography for regions belonging to the same plane, while inheriting results from CPW in the non-planar regions. We demonstrate how the segmentation information can be efficiently obtained and seamlessly integrated into the stabilization framework. Experimental results on a variety of real video sequences verify the effectiveness of our method.

5 0.66396356 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds

Author: Jacinto C. Nascimento, Gustavo Carneiro

Abstract: The solution for the top-down segmentation of non-rigid visual objects using machine learning techniques is generally regarded as too complex to be solved in its full generality given the large dimensionality of the search space of the explicit representation of the segmentation contour. In order to reduce this complexity, the problem is usually divided into two stages: rigid detection and non-rigid segmentation. The rationale is based on the fact that the rigid detection can be run in a lower dimensionality space (i.e., less complex and faster) than the original contour space, and its result is then used to constrain the non-rigid segmentation. In this paper, we propose the use of sparse manifolds to reduce the dimensionality of the rigid detection search space of current state-of-the-art top-down segmentation methodologies. The main goals targeted by this smaller dimensionality search space are the decrease of the search running time complexity and the reduction of the training complexity of the rigid detector. These goals are attainable given that both the search and training complexities are function of the dimensionality of the rigid search space. We test our approach in the segmentation of the left ventricle from ultrasound images and lips from frontal face images. Compared to the performance of state-of-the-art non-rigid segmentation system, our experiments show that the use of sparse manifolds for the rigid detection leads to the two goals mentioned above.

6 0.634184 108 cvpr-2013-Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera

7 0.63201743 104 cvpr-2013-Deep Convolutional Network Cascade for Facial Point Detection

8 0.63081318 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation

9 0.63070822 245 cvpr-2013-Layer Depth Denoising and Completion for Structured-Light RGB-D Cameras

10 0.63030112 115 cvpr-2013-Depth Super Resolution by Rigid Body Self-Similarity in 3D

11 0.63002485 271 cvpr-2013-Locally Aligned Feature Transforms across Views

12 0.63001096 61 cvpr-2013-Beyond Point Clouds: Scene Understanding by Reasoning Geometry and Physics

13 0.62978864 363 cvpr-2013-Robust Multi-resolution Pedestrian Detection in Traffic Scenes

14 0.6292628 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases

15 0.62924218 221 cvpr-2013-Incorporating Structural Alternatives and Sharing into Hierarchy for Multiclass Object Recognition and Detection

16 0.62919092 54 cvpr-2013-BRDF Slices: Accurate Adaptive Anisotropic Appearance Acquisition

17 0.62918651 27 cvpr-2013-A Theory of Refractive Photo-Light-Path Triangulation

18 0.62909615 118 cvpr-2013-Detecting Pulse from Head Motions in Video

19 0.62901008 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

20 0.62867379 443 cvpr-2013-Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances