iccv iccv2013 iccv2013-207 knowledge-graph by maker-knowledge-mining

207 iccv-2013-Illuminant Chromaticity from Image Sequences


Source: pdf

Author: Veronique Prinet, Dani Lischinski, Michael Werman

Abstract: We estimate illuminant chromaticity from temporal sequences, for scenes illuminated by either one or two dominant illuminants. While there are many methods for illuminant estimation from a single image, few works so far have focused on videos, and even fewer on multiple light sources. Our aim is to leverage information provided by the temporal acquisition, where either the objects or the camera or the light source are/is in motion in order to estimate illuminant color without the need for user interaction or using strong assumptions and heuristics. We introduce a simple physically-based formulation based on the assumption that the incident light chromaticity is constant over a short space-time domain. We show that a deterministic approach is not sufficient for accurate and robust estimation: however, a probabilistic formulation makes it possible to implicitly integrate away hidden factors that have been ignored by the physical model. Experimental results are reported on a dataset of natural video sequences and on the GrayBall benchmark, indicating that we compare favorably with the state-of-the-art.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Illuminant Chromaticity from Image Sequences Véronique Prinet Dani Lischinski Michael Werman The Hebrew University of Jerusalem, Israel Abstract We estimate illuminant chromaticity from temporal sequences, for scenes illuminated by either one or two dominant illuminants. [sent-1, score-1.3]

2 While there are many methods for illuminant estimation from a single image, few works so far have focused on videos, and even fewer on multiple light sources. [sent-2, score-0.942]

3 Our aim is to leverage information provided by the temporal acquisition, where either the objects or the camera or the light source are/is in motion in order to estimate illuminant color without the need for user interaction or using strong assumptions and heuristics. [sent-3, score-1.031]

4 We introduce a simple physically-based formulation based on the assumption that the incident light chromaticity is constant over a short space-time domain. [sent-4, score-0.65]

5 Estimating the colors of illuminants in a scene is thus an important task in computer vision and computational photography, making it possible to white-balance an image or a video sequence, or to apply post-exposure relighting effects. [sent-9, score-0.245]

6 However, most existing color constancy and white balance methods assume that the illumination in the scene is dominated by a single illuminant color. [sent-10, score-1.087]

7 For example, in an outdoor scene the illuminant color in the sunlit areas differs significantly from the illuminant color in the shade, a difference that becomes more apparent towards sunset. [sent-12, score-1.711]

8 Left: First frame of a sequence recorded under two light sources and corresponding illuminant colors (ESTimated and Ground Truth). [sent-14, score-1.047]

9 [7] propose a method for recovering the linear mixture coefficients at each pixel of an image, but rely on the user to provide their method with the colors of the two illuminants. [sent-19, score-0.183]

10 By using multiple images, we can formulate the problem of illuminant estimation in a well-constrained form, thus avoiding the need for any prior or additional information provided by a user, as most previous works do. [sent-20, score-0.81]

11 We show that our approach can be extended to scenes lit by a spatially varying mixture of two different illuminants. [sent-24, score-0.174]

12 Hence, we propose an efficient framework for estimating the chromaticity vectors of both illuminants, as well as recovering their relative mixture across the scene. [sent-25, score-0.482]

13 We validate our illuminant estimation approach on existing as well as new datasets and demonstrate the ability to white-balance and relight such images. [sent-27, score-0.81]

14 The rest of the paper is organized as follows: after a short review of the state-of-the-art in color constancy, we describe a new method for estimating the illuminant color from a natural video sequence, as long as some surfaces in the scene have a specular reflectance component (Section 3). [sent-28, score-1.086]

15 We then extend this method to the completely automatic estimation of two illuminant colors from a sequence, along with the corresponding mixture coefficients, without requiring any additional input (Section 4). [sent-29, score-0.928]

16 Note that none of these approaches focus on video or temporal sequences: to our knowledge, the only work dealing with illuminant estimation in videos is based on averaging results from existing frame-wise methods [14, 19]. [sent-34, score-0.888]

17 Shafer [15] introduced the physically-based dichromatic reflection model, which decomposes the observed radiance into diffuse and specular components. [sent-36, score-0.175]

18 This model has been used by a number of researchers to estimate illuminant color [10, 9, 3]. [sent-37, score-0.848]

19 [20] operate on a pair of images, simultaneously estimating illuminant chromaticity, correspondences between pairs of points, and specularities. [sent-41, score-0.794]

20 The closest work to our single illuminant estimation method (described in Section 3) is that of Yang et al. [sent-45, score-0.81]

21 [20], which is based on a heuristic that votes for discretized values of the illuminant chromaticity Γ. [sent-46, score-1.12]

22 In contrast to [20], starting from the same equations, we formulate the problem of illuminant estimation in a probabilistic manner to implicitly integrate hidden factors that have been ignored by the underlying physical model. [sent-47, score-0.849]

23 This results in a different, simple yet robust approach, making it possible to reliably estimate the global illuminant chromaticity from natural image sequences acquired under uncontrolled settings. [sent-48, score-1.252]

24 The need for multi-illuminant estimation arises when different regions of a scene captured by a camera are illuminated by different light sources [16, 7, 2, 8, 6]. [sent-50, score-0.29]

25 [6] propose estimating the incident light chromaticity locally in patches around points of interest, before estimating the two global illuminant colors. [sent-52, score-0.788]

26 Conversely, our approach, which is also based on a local-to-global framework, is mathematically justified and based on inverting the compositing equation of the illuminant colors. [sent-54, score-0.786]

27 [7], who address a different but related problem: performing white balance in scenes lit by two differently colored illuminants. [sent-56, score-0.198]

28 However, this work operates on a single image assuming that the two global illuminant chromaticities are known and focuses on recovering their mixture across the image. [sent-57, score-0.983]

29 In contrast, the method we describe in Section 4 operates on a temporal sequence of images, automatically recovering both the illuminant chromaticities and the illuminant mixtures. [sent-58, score-1.743]

30 The dichromatic reflection model: The dichromatic model for dielectric materials (such as plastic, acrylics, hair, etc. [sent-62, score-0.22]

31 ) expresses the light reflected from an object as a linear combination of diffuse (body) and specular (interface) components [15]. [sent-63, score-0.22]

32 The diffuse component has the same radiance when viewed from any angle, following Lambert’s law, while the specular component captures the directional reflection of the incident light hitting the object’s surface. [sent-64, score-0.477]

33 The specular component m(p) L has the same spectral distribution as the incident light [15, 10, 3]. [sent-66, score-0.384]

34 In this section we assume that the spectral distribution of the illuminant is identical everywhere in the scene, making L independent of the spatial location p. [sent-67, score-0.786]

35 Illuminant chromaticity from a sequence: Extending the model in eq. [sent-70, score-0.394]

36 (1) to a temporal sequence of images, assuming that the illuminant color L does not change with time, gives: I(p, t) = D(p, t) + m(p, t) L. (3) [sent-71, score-0.94]

37 Thus, the illuminant color L = (Lr, Lg, Lb) can be derived from equations (2) and (3). [sent-75, score-0.828]

38 Γ = (Γr, Γg, Γb) is the global incident light chromaticity vector, simply obtained by differentiating (and normalizing) the RGB irradiance components of any point p with a specular component, tracked between two consecutive frames t and t + Δt. [sent-85, score-0.844]
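
To make this relation concrete, below is a minimal sketch (not the authors' implementation) of how per-point chromaticity candidates could be computed from two frames, assuming the frames are already aligned or that tracked correspondences have been applied, and that candidate points come from an edge mask as discussed later; the function name and interface are illustrative.

```python
import numpy as np

def per_point_chromaticity(frame_t, frame_t1, mask, eps=1e-6):
    """Per-point illuminant chromaticity candidates from two aligned frames.

    frame_t, frame_t1: (H, W, 3) float arrays of linear RGB irradiance.
    mask: (H, W) boolean array of candidate specular points (e.g. edge pixels).
    At a point with a specular change, dI = I(p, t+dt) - I(p, t) = dm * L,
    so the ratio dI_c / sum_c dI_c cancels dm and leaves a chromaticity estimate.
    Returns an (N, 3) array of per-point chromaticity observations.
    """
    d_i = frame_t1[mask].astype(np.float64) - frame_t[mask].astype(np.float64)
    denom = d_i.sum(axis=1)            # sum over R, G, B
    valid = np.abs(denom) > eps        # discard points with no temporal change
    return d_i[valid] / denom[valid, None]
```

Each returned row is one noisy observation of the illuminant chromaticity; the probabilistic aggregation described below turns these observations into a single robust estimate.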

39 So far we implicitly assumed that: (1) a change in the specular reflection occurs at p from time t to time t + Δt; and (2) the displacement Δp is estimated accurately. [sent-87, score-0.165]

40 A possible objection to this choice might be that pixel values at edges often contain a mixture of light coming from two different objects; experimentally, we found that this is not a limiting factor. [sent-95, score-0.208]

41 Yang et al. [20] already proposed estimating illuminant chromaticity from a pair of images using eq. [sent-100, score-1.148]

42 Above we made the additional assumption that the observed channels xc are mutually independent, and depend only on the corresponding illuminant channel Γc. [sent-113, score-0.887]

43 Figure 2 (bottom) illustrates the empirical distributions for the three channels xc = ΔIc(p, t)/|ΔI(p, t)|, computed from the video frames (top). [sent-120, score-0.172]
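
As a rough illustration of why the Laplace model matters: maximizing an i.i.d. Laplace likelihood over a location parameter reduces to taking a median, which is far less sensitive to outliers (bad tracks, non-specular points) than a mean. The sketch below captures that intuition only; it is a simplification, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def aggregate_chromaticity(x):
    """Aggregate per-point observations x (N, 3) into one illuminant estimate.

    Under P(x_c | Gamma_c) proportional to exp(-|x_c - Gamma_c| / b), the
    maximum-likelihood location is the per-channel median; renormalizing
    makes the three channels sum to one.
    """
    gamma = np.median(x, axis=0)
    return gamma / gamma.sum()
```

For example, gamma = aggregate_chromaticity(per_point_chromaticity(f0, f1, edges)) would chain the two sketches over one pair of frames.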

44 Two light sources: Until now we assumed a single illuminant whose color is constant across the image. [sent-130, score-0.979]

45 In this section we extend our approach to the common scenario where the illuminant color at each point may be modeled as a spatially varying mixture of two dominant chromaticities. [sent-131, score-0.957]

46 Examples include: illumination by a mixture of sunlight and skylight, or a mixture of artificial illumination and natural daylight. [sent-132, score-0.289]

47 [7] proposed a method for recovering the mixture coefficients from a single image, when the two global illuminant chromaticities are known. [sent-134, score-1.003]

48 In contrast to their approach, we use a temporal sequence (with as few as 2-3 images) but recover both the two chromaticities and their spatially varying mixture. [sent-135, score-0.218]

49 We assume that the mixture is constant across small space-time patches, and consequently the combined illuminant chromaticity is also constant across each patch. [sent-136, score-0.43]

50 We begin by independently estimating the combined illuminant chromaticity over a set of small overlapping patches, using the method described in the previous section separately for each patch. [sent-138, score-1.148]

51 Since some of the patches might not contain enough edge points with specularities, making it impossible to obtain an estimate of the illuminant there, we use linear interpolation from neighboring patches to fill such holes. [sent-139, score-0.849]

52 We then use the resulting combined illuminant chromaticity map to estimate the spatially varying mixture coefficients and the two illuminant chromaticities, as described in the remainder of this section. [sent-140, score-2.054]

53 We replace the constant illuminant in eq. (2) at point (p, t) with a spatially varying one: L(p, t) = k1(p, t) Γ1 + k2(p, t) Γ2, (9) where Γ1 and Γ2 are the two (unknown) normalized global incident illuminant chromaticities, and k1 and k2 are the non-negative intensity coefficients of Γ1 and Γ2. [sent-144, score-0.236]

54 Assuming that the incident light L(p, t) is roughly constant across small space-time patches, we write: Ls = k1s Γ1 + k2s Γ2 (10) for each small space-time patch s. [sent-145, score-0.296]

55 Normalizing both sides and making use of the fact that Γ1 and Γ2 are normalized, we express the local combined incident light chromaticity as a convex combination: Γcs = Lcs / ‖Ls‖1 = (k1s Γ1,c + k2s Γ2,c) / ‖k1s Γ1 + k2s Γ2‖1 = αs Γ1,c + (1 − αs) Γ2,c, (11) where αs = k1s / (k1s + k2s), for c ∈ {r, g, b}. [sent-146, score-0.65]
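
Equation (11) says that the per-patch chromaticities Γs all lie on the segment between Γ1 and Γ2. One straightforward way to invert it, sketched below under a moderate-noise assumption, is to fit the principal direction of the patch chromaticities, take the extreme projections as the two illuminants, and read αs off each patch's position along the segment; the paper's exact fitting procedure may differ from this sketch.

```python
import numpy as np

def decompose_two_illuminants(gamma_patches):
    """Recover Gamma_1, Gamma_2 and per-patch mixture weights alpha_s.

    gamma_patches: (S, 3) combined chromaticities, one per space-time patch.
    Since each row is a convex combination of the two illuminants (eq. 11),
    the rows lie near a 1-D segment; we fit it with an SVD and take the
    extreme projections as the two endpoints.
    """
    mean = gamma_patches.mean(axis=0)
    centered = gamma_patches - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t = centered @ vt[0]                         # coordinate along the segment
    span = max(t.max() - t.min(), 1e-9)
    gamma_1 = mean + t.max() * vt[0]             # one endpoint
    gamma_2 = mean + t.min() * vt[0]             # the other endpoint
    alpha = (t - t.min()) / span                 # alpha_s in [0, 1]
    return gamma_1 / gamma_1.sum(), gamma_2 / gamma_2.sum(), alpha
```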

56 Video dataset recorded under normal lighting conditions using a single illuminant: the first frames of six of the sequences. [sent-159, score-0.189]

57 We used some off-the-shelf functions with the following settings: • Illuminant chromaticity estimation is performed in linear RGB, assuming a standard gamma correction. [sent-167, score-0.419]

58 Datasets and experimental settings: We evaluate the performance of single illuminant estimation on two datasets, a newly created dataset of 13 video sequences and the GrayBall database [1]. [sent-184, score-0.896]

59 To validate the twoilluminant estimation approach, we recorded three video sequences of scenes lit by two light sources. [sent-185, score-0.377]

60 A flat grey card with spectrally uniform reflectance was placed in each scene, appearing in each video sequence for a few seconds. [sent-198, score-0.206]

61 The ground truth illuminant was estimated, for each sequence individually, using the grey card. [sent-202, score-0.875]

62 For each sequence, we also computed the variance σc and mean angular variation β of the grey card RGB vectors to ensure that the scene complies with a constant illumination assumption (0. [sent-204, score-0.22]
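
For reference, a minimal sketch of how such a gray-card ground truth and consistency statistic could be computed; the interface is hypothetical, and σc is simply the per-channel spread mentioned above (the angular spread β would use the same measure as the evaluation metric reported below).

```python
import numpy as np

def gray_card_ground_truth(card_pixels_per_frame):
    """Per-sequence ground-truth illuminant from gray-card samples.

    card_pixels_per_frame: list of (Ni, 3) arrays of gray-card RGB values,
    one array per frame (linear RGB). A spectrally flat card reflects the
    illuminant color, so its mean chromaticity is used as ground truth.
    Returns (gamma_gt, sigma_c): the mean chromaticity and the per-channel
    standard deviation of the per-frame chromaticities, used to check the
    constant-illumination assumption.
    """
    per_frame = np.array([p.reshape(-1, 3).mean(axis=0)
                          for p in card_pixels_per_frame])
    per_frame /= per_frame.sum(axis=1, keepdims=True)   # per-frame chromaticity
    gamma_gt = per_frame.mean(axis=0)
    return gamma_gt / gamma_gt.sum(), per_frame.std(axis=0)
```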

63 The three two-illuminant sequences were lit by: two incandescent lamps (blue and red), sun and skylight, and an incandescent lamp and natural daylight. [sent-210, score-0.202]

64 Two grey cards were placed in the scene during acquisition, ensuring that each grey card is illuminated by only one of the illuminants. [sent-211, score-0.284]

65 A small grey sphere was mounted onto the video camera, appearing in all the images, and used to estimate a per-frame ground truth illuminant chromaticity. [sent-217, score-0.882]

66 Note that, in the GrayBall database, the illuminant varies slightly from frame to frame and therefore violates our assumption of uniform illumination. [sent-221, score-0.785]

67 The reported angular error (in degrees) is averaged over the nine video sequences recorded with normal lighting conditions. [sent-235, score-0.246]

68 Results are reported in terms of the angular deviation β between the ground truth Γg and the estimated illuminant Γˆ, in the camera sensor basis: β = arccos((Γg · Γˆ) / (‖Γg‖ ‖Γˆ‖)). [sent-243, score-0.83]
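
The same error measure in code form, as a small sketch (returned in degrees, with the cosine clipped for numerical safety):

```python
import numpy as np

def angular_error_deg(gamma_gt, gamma_est):
    """Angular deviation beta (degrees) between ground-truth and estimated
    illuminant vectors: beta = arccos(gamma_gt . gamma_est / (|gamma_gt| |gamma_est|))."""
    cos = np.dot(gamma_gt, gamma_est) / (
        np.linalg.norm(gamma_gt) * np.linalg.norm(gamma_est))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```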

70 Single illuminant estimation: We begin with an experimental validation of the claims made in Section 3. [sent-246, score-0.81]

70 2 regarding the choice to use edge points for illuminant estimation and the use of the Laplace distribution to model P(xc |Γc). [sent-247, score-0.833]

71 Table 1 compares three different strategies for choosing the specular points: points detected by the Canny edge detector, points next to edges, and points from the entire image. [sent-248, score-0.191]

72 Tables 2 and 3 report illuminant estimation accuracy for the sequences recorded under normal illumination conditions (Fig. [sent-255, score-0.988]

73 The IIC method was chosen because it is a popular reference among color constancy approaches based on a physical model. [sent-269, score-0.157]

74 All these approaches estimate a per-frame illuminant; we average the illuminant chromaticity vector computed for each frame, and report the angular error between the mean chromaticity vector and the ground truth [19]. [sent-271, score-1.537]

75 GGM does not perform well in extreme light conditions, because the very limited range of color visible in the input frames does not enable a good matching with the prior gamut used by this algorithm. [sent-275, score-0.318]

76 (b) Outdoor scene lit by sunlight (Γ1) and skylight (Γ2). [sent-303, score-0.178]

77 Two-illuminant estimation: Figure 1 shows the estimated incident light color map {Γs} (s = 1, ..., S), the light mixture coefficients {αs}, and the estimated light chromaticity, computed from sequence (a). [sent-313, score-1.61]

78 During recording, the scene was illuminated by a red light from the back on the right side and a blue light from the front on the left side. [sent-314, score-0.384]

79 The incident color map (middle) clearly captures the pattern of these two dominant light sources. [sent-315, score-0.38]

80 The mixture coefficients map (right) indicates the relative contribution of one illuminant with respect to the other, interpolated across the image. [sent-316, score-0.883]

81 First frames of three video sequences (top) and estimated illuminant colors (bottom). [sent-331, score-0.968]

82 This makes the estimation of strongly colored illuminants (e. [sent-337, score-0.192]

83 Motion between frames originates from camera displacement (right), object/human motion (middle), or light source motion (left). [sent-341, score-0.185]

84 Color patches in the bottom row show the two estimated illuminant colors for each sequence. [sent-342, score-0.849]

85 Application to white balance correction: The aim of white balance correction is to remove the color cast introduced by a non-white illuminant. [sent-346, score-0.352]

86 Figure 7 demonstrates the result of applying white balance to a scene illuminated by a mixture of (warmer colored) late afternoon sunlight and (cooler colored) skylight. [sent-349, score-0.366]

87 Having estimated the incident light color Γs across the image, we simply perform the white balance separately at each pixel, instead of globally for the entire image, producing the result shown in Figure 7. [sent-351, score-0.486]
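
A minimal sketch of such a per-pixel (von Kries-style) correction, assuming the per-patch estimates have already been interpolated to a full-resolution chromaticity map; the neutral target of (1/3, 1/3, 1/3) is a common convention rather than the paper's stated choice.

```python
import numpy as np

def spatially_varying_white_balance(image, gamma_map, eps=1e-6):
    """Per-pixel white balance from a local illuminant chromaticity map.

    image: (H, W, 3) linear RGB frame.
    gamma_map: (H, W, 3) combined incident-light chromaticity at each pixel.
    Each pixel is divided by its own illuminant color and mapped to a neutral
    white, instead of applying a single global correction.
    """
    neutral = np.full(3, 1.0 / 3.0)
    gain = neutral / np.maximum(gamma_map, eps)   # per-pixel, per-channel gain
    return image * gain
```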

88 (a) Input frame of a scene illuminated by afternoon sunlight and skylight. [sent-353, score-0.183]

89 (b) Result of spatially varying white balance correction after two-illuminant estimation. [sent-354, score-0.942]

90 (c) “Relighting” by changing the chromaticity of one of the illuminants. [sent-355, score-0.354]

91 (d) For comparison: uniform white balance correction using a single estimated illuminant. [sent-356, score-0.185]

92 A global white balance correction (using a single estimated illuminant) is shown in Figure 7(d) for comparison, suffering from a stronger remaining greenish color cast. [sent-358, score-0.228]

93 This is demonstrated in Figure 7(c), where the illuminant corresponding to the sunlight was changed to a more reddish color. [sent-360, score-0.85]

94 Conclusion: The ease with which temporal sequences can be acquired with commercial cameras, and the ubiquity of videos on the web, make it natural to exploit temporal information for various image processing tasks. [sent-362, score-0.185]

95 In this work, we presented an effective way to leverage temporal dependencies between frames to tackle the problem of illuminant estimation from a video sequence. [sent-363, score-0.941]

96 Our approach extends naturally to scenes lit by two global illuminants, whenever the incident light chromaticity at each point of the scene can be modeled as a mixture of the two illuminant colors. [sent-366, score-1.588]

97 We show on several datasets that our results are generally comparable to or improve upon the state-of-the-art, both for single-illuminant estimation and for two-illuminant estimation. [sent-367, score-0.81]

98 Solving for colour constancy using a constrained dichromatic reflection model. [sent-385, score-0.233]

99 Estimation of multiple illuminants based on specular highlights detection. [sent-421, score-0.212]

100 Method for computing the scene-illuminant chromaticity from specular highlights. [sent-435, score-0.442]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('illuminant', 0.766), ('chromaticity', 0.354), ('incident', 0.164), ('light', 0.132), ('grayball', 0.124), ('illuminants', 0.124), ('chromaticities', 0.096), ('constancy', 0.095), ('xc', 0.092), ('specular', 0.088), ('dichromatic', 0.082), ('iic', 0.077), ('mixture', 0.076), ('laplace', 0.072), ('grey', 0.069), ('illuminated', 0.066), ('color', 0.062), ('ggm', 0.062), ('incandescent', 0.062), ('sequences', 0.059), ('sunlight', 0.057), ('balance', 0.056), ('reflection', 0.056), ('frames', 0.053), ('white', 0.051), ('card', 0.051), ('temporal', 0.051), ('recorded', 0.048), ('shafer', 0.046), ('skylight', 0.046), ('lit', 0.046), ('lighting', 0.045), ('hsu', 0.045), ('estimation', 0.044), ('angular', 0.043), ('specularities', 0.042), ('colors', 0.042), ('coefficients', 0.041), ('gamut', 0.041), ('gijsenij', 0.041), ('sequence', 0.04), ('correction', 0.038), ('diffuse', 0.037), ('afternoon', 0.031), ('ciurea', 0.031), ('grayworld', 0.031), ('illumi', 0.031), ('lamps', 0.031), ('thirds', 0.031), ('spatially', 0.031), ('extreme', 0.03), ('scene', 0.029), ('consecutive', 0.029), ('uncontrolled', 0.029), ('channel', 0.029), ('estimating', 0.028), ('illumination', 0.028), ('reddish', 0.027), ('composited', 0.027), ('video', 0.027), ('indoor', 0.027), ('outdoor', 0.026), ('bc', 0.026), ('funt', 0.025), ('side', 0.025), ('recovering', 0.024), ('gevers', 0.024), ('natural', 0.024), ('irradiance', 0.024), ('normal', 0.024), ('colored', 0.024), ('edge', 0.023), ('ic', 0.023), ('rgb', 0.023), ('lamp', 0.023), ('relighting', 0.023), ('experimentally', 0.022), ('dominant', 0.022), ('scenes', 0.021), ('assuming', 0.021), ('estimated', 0.021), ('normalizing', 0.021), ('factors', 0.02), ('estimate', 0.02), ('compositing', 0.02), ('lg', 0.02), ('choosing', 0.02), ('patches', 0.02), ('acquisition', 0.02), ('everywhere', 0.02), ('israel', 0.02), ('lischinski', 0.019), ('matting', 0.019), ('lb', 0.019), ('sources', 0.019), ('cs', 0.019), ('uniform', 0.019), ('conditions', 0.019), ('ignored', 0.019), ('canny', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 207 iccv-2013-Illuminant Chromaticity from Image Sequences

Author: Veronique Prinet, Dani Lischinski, Michael Werman

Abstract: We estimate illuminant chromaticity from temporal sequences, for scenes illuminated by either one or two dominant illuminants. While there are many methods for illuminant estimation from a single image, few works so far have focused on videos, and even fewer on multiple light sources. Our aim is to leverage information provided by the temporal acquisition, where either the objects or the camera or the light source are/is in motion in order to estimate illuminant color without the need for user interaction or using strong assumptions and heuristics. We introduce a simple physically-based formulation based on the assumption that the incident light chromaticity is constant over a short space-time domain. We show that a deterministic approach is not sufficient for accurate and robust estimation: however, a probabilistic formulation makes it possible to implicitly integrate away hidden factors that have been ignored by the physical model. Experimental results are reported on a dataset of natural video sequences and on the GrayBall benchmark, indicating that we compare favorably with the state-of-the-art.

2 0.37892234 5 iccv-2013-A Color Constancy Model with Double-Opponency Mechanisms

Author: Shaobing Gao, Kaifu Yang, Chaoyi Li, Yongjie Li

Abstract: The double-opponent color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as the physiological basis of color constancy. We introduce a new color constancy model by imitating the functional properties of the HVS from the retina to the double-opponent cells in V1. The idea behind the model originates from the observation that the color distribution of the responses of double-opponent cells to the input color-biased images coincides well with the light source direction. Then the true illuminant color of a scene is easily estimated by searching for the maxima of the separate RGB channels of the responses of double-opponent cells in the RGB space. Our systematical experimental evaluations on two commonly used image datasets show that the proposed model can produce competitive results in comparison to the complex state-of-the-art approaches, but with a simple implementation and without the need for training.

3 0.10652246 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues

Author: Qifeng Chen, Vladlen Koltun

Abstract: We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images.

4 0.10546799 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

Author: Michael Weinmann, Aljosa Osep, Roland Ruiters, Reinhard Klein

Abstract: In this paper, we present a novel, robust multi-view normal field integration technique for reconstructing the full 3D shape of mirroring objects. We employ a turntablebased setup with several cameras and displays. These are used to display illumination patterns which are reflected by the object surface. The pattern information observed in the cameras enables the calculation of individual volumetric normal fields for each combination of camera, display and turntable angle. As the pattern information might be blurred depending on the surface curvature or due to nonperfect mirroring surface characteristics, we locally adapt the decoding to the finest still resolvable pattern resolution. In complex real-world scenarios, the normal fields contain regions without observations due to occlusions and outliers due to interreflections and noise. Therefore, a robust reconstruction using only normal information is challenging. Via a non-parametric clustering of normal hypotheses derived for each point in the scene, we obtain both the most likely local surface normal and a local surface consistency estimate. This information is utilized in an iterative mincut based variational approach to reconstruct the surface geometry.

5 0.099459194 82 iccv-2013-Compensating for Motion during Direct-Global Separation

Author: Supreeth Achar, Stephen T. Nuske, Srinivasa G. Narasimhan

Abstract: Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to beperformed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.

6 0.09499608 421 iccv-2013-Total Variation Regularization for Functions with Values in a Manifold

7 0.090677992 385 iccv-2013-Separating Reflective and Fluorescent Components Using High Frequency Illumination in the Spectral Domain

8 0.087738693 343 iccv-2013-Real-World Normal Map Capture for Nearly Flat Reflective Surfaces

9 0.084366053 405 iccv-2013-Structured Light in Sunlight

10 0.080712728 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination

11 0.063890941 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal

12 0.060908452 262 iccv-2013-Matching Dry to Wet Materials

13 0.057819821 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling

14 0.048401464 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction

15 0.045953732 300 iccv-2013-Optical Flow via Locally Adaptive Fusion of Complementary Data Costs

16 0.044075109 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects

17 0.043024659 160 iccv-2013-Fast Object Segmentation in Unconstrained Video

18 0.042946685 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes

19 0.042797655 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

20 0.040270269 419 iccv-2013-To Aggregate or Not to aggregate: Selective Match Kernels for Image Search


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.114), (1, -0.066), (2, -0.01), (3, 0.027), (4, -0.012), (5, 0.013), (6, -0.002), (7, -0.043), (8, 0.012), (9, -0.016), (10, -0.012), (11, -0.01), (12, 0.039), (13, 0.011), (14, -0.021), (15, -0.057), (16, -0.047), (17, 0.011), (18, 0.021), (19, 0.024), (20, 0.019), (21, -0.011), (22, 0.052), (23, -0.107), (24, -0.157), (25, 0.12), (26, 0.018), (27, -0.045), (28, 0.12), (29, -0.135), (30, 0.129), (31, 0.027), (32, 0.054), (33, 0.098), (34, 0.072), (35, 0.114), (36, -0.056), (37, -0.152), (38, -0.142), (39, 0.078), (40, -0.029), (41, 0.028), (42, 0.055), (43, -0.02), (44, 0.035), (45, -0.031), (46, 0.051), (47, -0.017), (48, -0.094), (49, 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92088974 207 iccv-2013-Illuminant Chromaticity from Image Sequences

Author: Veronique Prinet, Dani Lischinski, Michael Werman

Abstract: We estimate illuminant chromaticity from temporal sequences, for scenes illuminated by either one or two dominant illuminants. While there are many methods for illuminant estimation from a single image, few works so far have focused on videos, and even fewer on multiple light sources. Our aim is to leverage information provided by the temporal acquisition, where either the objects or the camera or the light source are/is in motion in order to estimate illuminant color without the need for user interaction or using strong assumptions and heuristics. We introduce a simple physically-based formulation based on the assumption that the incident light chromaticity is constant over a short space-time domain. We show that a deterministic approach is not sufficient for accurate and robust estimation: however, a probabilistic formulation makes it possible to implicitly integrate away hidden factors that have been ignored by the physical model. Experimental results are reported on a dataset of natural video sequences and on the GrayBall benchmark, indicating that we compare favorably with the state-of-the-art.

2 0.89709353 385 iccv-2013-Separating Reflective and Fluorescent Components Using High Frequency Illumination in the Spectral Domain

Author: Ying Fu, Antony Lam, Imari Sato, Takahiro Okabe, Yoichi Sato

Abstract: Hyperspectral imaging is beneficial to many applications but current methods do not consider fluorescent effects which are present in everyday items ranging from paper, to clothing, to even our food. Furthermore, everyday fluorescent items exhibit a mix of reflectance and fluorescence. So proper separation of these components is necessary for analyzing them. In this paper, we demonstrate efficient separation and recovery of reflective and fluorescent emission spectra through the use of high frequency illumination in the spectral domain. With the obtained fluorescent emission spectra from our high frequency illuminants, we then present to our knowledge, the first method for estimating the fluorescent absorption spectrum of a material given its emission spectrum. Conventional bispectral measurement of absorption and emission spectra needs to examine all combinations of incident and observed light wavelengths. In contrast, our method requires only two hyperspectral images. The effectiveness of our proposed methods are then evaluated through a combination of simulation and real experiments. We also demonstrate an application of our method to synthetic relighting of real scenes.

3 0.88157475 5 iccv-2013-A Color Constancy Model with Double-Opponency Mechanisms

Author: Shaobing Gao, Kaifu Yang, Chaoyi Li, Yongjie Li

Abstract: The double-opponent color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as the physiological basis of color constancy. We introduce a new color constancy model by imitating the functional properties of the HVS from the retina to the double-opponent cells in V1. The idea behind the model originates from the observation that the color distribution of the responses of double-opponent cells to the input color-biased images coincides well with the light source direction. Then the true illuminant color of a scene is easily estimated by searching for the maxima of the separate RGB channels of the responses of double-opponent cells in the RGB space. Our systematical experimental evaluations on two commonly used image datasets show that the proposed model can produce competitive results in comparison to the complex state-of-the-art approaches, but with a simple implementation and without the need for training.

4 0.80567396 405 iccv-2013-Structured Light in Sunlight

Author: Mohit Gupta, Qi Yin, Shree K. Nayar

Abstract: Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than the projected light. In this paper, we propose the concept of light-concentration to overcome strong ambient illumination. Our key observation is that given a fixed light (power) budget, it is always better to allocate it sequentially in several portions of the scene, as compared to spreading it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders lower acquisition time than existing approaches. Our approach is illumination-adaptive as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.

5 0.72286421 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues

Author: Qifeng Chen, Vladlen Koltun

Abstract: We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images.

6 0.71326017 262 iccv-2013-Matching Dry to Wet Materials

7 0.64384884 82 iccv-2013-Compensating for Motion during Direct-Global Separation

8 0.59474778 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects

9 0.55850917 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length

10 0.53399718 271 iccv-2013-Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction

11 0.52094799 199 iccv-2013-High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination

12 0.46052262 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal

13 0.45321622 135 iccv-2013-Efficient Image Dehazing with Boundary Constraint and Contextual Regularization

14 0.43500912 423 iccv-2013-Towards Motion Aware Light Field Video for Dynamic Scenes

15 0.4246617 98 iccv-2013-Cross-Field Joint Image Restoration via Scale Map

16 0.42196241 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

17 0.41782355 145 iccv-2013-Estimating the Material Properties of Fabric from Video

18 0.40500703 89 iccv-2013-Constructing Adaptive Complex Cells for Robust Visual Tracking

19 0.40398118 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis

20 0.38187507 128 iccv-2013-Dynamic Probabilistic Volumetric Models


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.047), (7, 0.016), (21, 0.024), (26, 0.079), (27, 0.011), (31, 0.048), (40, 0.03), (42, 0.094), (48, 0.297), (64, 0.046), (73, 0.042), (89, 0.155)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.89243269 331 iccv-2013-Pyramid Coding for Functional Scene Element Recognition in Video Scenes

Author: Eran Swears, Anthony Hoogs, Kim Boyer

Abstract: Recognizing functional scene elemeents in video scenes based on the behaviors of moving objects that interact with them is an emerging problem ooff interest. Existing approaches have a limited ability to chharacterize elements such as cross-walks, intersections, andd buildings that have low activity, are multi-modal, or havee indirect evidence. Our approach recognizes the low activvity and multi-model elements (crosswalks/intersections) by introducing a hierarchy of descriptive clusters to fform a pyramid of codebooks that is sparse in the numbber of clusters and dense in content. The incorporation oof local behavioral context such as person-enter-building aand vehicle-parking nearby enables the detection of elemennts that do not have direct motion-based evidence, e.g. buuildings. These two contributions significantly improvee scene element recognition when compared against thhree state-of-the-art approaches. Results are shown on tyypical ground level surveillance video and for the first time on the more complex Wide Area Motion Imagery.

2 0.86494291 311 iccv-2013-Pedestrian Parsing via Deep Decompositional Network

Author: Ping Luo, Xiaogang Wang, Xiaoou Tang

Abstract: We propose a new Deep Decompositional Network (DDN) for parsing pedestrian images into semantic regions, such as hair, head, body, arms, and legs, where the pedestrians can be heavily occluded. Unlike existing methods based on template matching or Bayesian inference, our approach directly maps low-level visual features to the label maps of body parts with DDN, which is able to accurately estimate complex pose variations with good robustness to occlusions and background clutters. DDN jointly estimates occluded regions and segments body parts by stacking three types of hidden layers: occlusion estimation layers, completion layers, and decomposition layers. The occlusion estimation layers estimate a binary mask, indicating which part of a pedestrian is invisible. The completion layers synthesize low-level features of the invisible part from the original features and the occlusion mask. The decomposition layers directly transform the synthesized visual features to label maps. We devise a new strategy to pre-train these hidden layers, and then fine-tune the entire network using the stochastic gradient descent. Experimental results show that our approach achieves better segmentation accuracy than the state-of-the-art methods on pedestrian images with or without occlusions. Another important contribution of this paper is that it provides a large scale benchmark human parsing dataset1 that includes 3, 673 annotated samples collected from 171 surveillance videos. It is 20 times larger than existing public datasets.

3 0.83503258 63 iccv-2013-Bounded Labeling Function for Global Segmentation of Multi-part Objects with Geometric Constraints

Author: Masoud S. Nosrati, Shawn Andrews, Ghassan Hamarneh

Abstract: The inclusion of shape and appearance priors have proven useful for obtaining more accurate and plausible segmentations, especially for complex objects with multiple parts. In this paper, we augment the popular MumfordShah model to incorporate two important geometrical constraints, termed containment and detachment, between different regions with a specified minimum distance between their boundaries. Our method is able to handle multiple instances of multi-part objects defined by these geometrical hamarneh} @ s fu . ca (a)Standar laΩb ehlingΩfuhnctionseting(Ωb)hΩOuirseΩtijng Figure 1: The inside vs. outside ambiguity in (a) is resolved by our containment constraint in (b). constraints using a single labeling function while maintaining global optimality. We demonstrate the utility and advantages of these two constraints and show that the proposed convex continuous method is superior to other state-of-theart methods, including its discrete counterpart, in terms of memory usage, and metrication errors.

4 0.78903824 320 iccv-2013-Pose-Configurable Generic Tracking of Elongated Objects

Author: Daniel Wesierski, Patrick Horain

Abstract: Elongated objects have various shapes and can shift, rotate, change scale, and be rigid or deform by flexing, articulating, and vibrating, with examples as varied as a glass bottle, a robotic arm, a surgical suture, a finger pair, a tram, and a guitar string. This generally makes tracking of poses of elongated objects very challenging. We describe a unified, configurable framework for tracking the pose of elongated objects, which move in the image plane and extend over the image region. Our method strives for simplicity, versatility, and efficiency. The object is decomposed into a chained assembly of segments of multiple parts that are arranged under a hierarchy of tailored spatio-temporal constraints. In this hierarchy, segments can rescale independently while their elasticity is controlled with global orientations and local distances. While the trend in tracking is to design complex, structure-free algorithms that update object appearance on- line, we show that our tracker, with the novel but remarkably simple, structured organization of parts with constant appearance, reaches or improves state-of-the-art performance. Most importantly, our model can be easily configured to track exact pose of arbitrary, elongated objects in the image plane. The tracker can run up to 100 fps on a desktop PC, yet the computation time scales linearly with the number of object parts. To our knowledge, this is the first approach to generic tracking of elongated objects.

5 0.77335823 354 iccv-2013-Robust Dictionary Learning by Error Source Decomposition

Author: Zhuoyuan Chen, Ying Wu

Abstract: Sparsity models have recently shown great promise in many vision tasks. Using a learned dictionary in sparsity models can in general outperform predefined bases in clean data. In practice, both training and testing data may be corrupted and contain noises and outliers. Although recent studies attempted to cope with corrupted data and achieved encouraging results in testing phase, how to handle corruption in training phase still remains a very difficult problem. In contrast to most existing methods that learn the dictionaryfrom clean data, this paper is targeted at handling corruptions and outliers in training data for dictionary learning. We propose a general method to decompose the reconstructive residual into two components: a non-sparse component for small universal noises and a sparse component for large outliers, respectively. In addition, , further analysis reveals the connection between our approach and the “partial” dictionary learning approach, updating only part of the prototypes (or informative codewords) with remaining (or noisy codewords) fixed. Experiments on synthetic data as well as real applications have shown satisfactory per- formance of this new robust dictionary learning approach.

same-paper 6 0.75735641 207 iccv-2013-Illuminant Chromaticity from Image Sequences

7 0.67306972 220 iccv-2013-Joint Deep Learning for Pedestrian Detection

8 0.65182889 279 iccv-2013-Multi-stage Contextual Deep Learning for Pedestrian Detection

9 0.63921648 7 iccv-2013-A Deep Sum-Product Architecture for Robust Facial Attributes Analysis

10 0.63679528 206 iccv-2013-Hybrid Deep Learning for Face Verification

11 0.63064134 5 iccv-2013-A Color Constancy Model with Double-Opponency Mechanisms

12 0.62914193 106 iccv-2013-Deep Learning Identity-Preserving Face Space

13 0.62048358 208 iccv-2013-Image Co-segmentation via Consistent Functional Maps

14 0.61420894 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal

15 0.61331689 351 iccv-2013-Restoring an Image Taken through a Window Covered with Dirt or Rain

16 0.61227095 313 iccv-2013-Person Re-identification by Salience Matching

17 0.61114752 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error

18 0.61053431 61 iccv-2013-Beyond Hard Negative Mining: Efficient Detector Learning via Block-Circulant Decomposition

19 0.60503107 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition

20 0.60495818 270 iccv-2013-Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking