iccv iccv2013 iccv2013-215 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Kuan-Chuan Peng, Tsuhan Chen
Abstract: Most sky models only describe the cloudiness of the overall sky by a single category or parameter such as sky index, which does not account for the distribution of the clouds across the sky. To capture variable cloudiness, we extend the concept of sky index to a random field indicating the level of cloudiness of each sky pixel in our proposed sky representation based on the Igawa sky model. We formulate the problem of solving the sky index of every sky pixel as a labeling problem, where an approximate solution can be efficiently found. Experimental results show that our proposed sky model has better expressiveness, stability with respect to variation in camera parameters, and geo-location estimation in outdoor images compared to the uniform sky index model. Potential applications of our proposed sky model include sky image rendering, where sky images can be generated with an arbitrary cloud distribution at any time and any location, previously impossible with traditional sky models.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract: Most sky models only describe the cloudiness of the overall sky by a single category or parameter such as sky index, which does not account for the distribution of the clouds across the sky. [sent-2, score-2.896]
2 To capture variable cloudiness, we extend the concept of sky index to a random field indicating the level of cloudiness of each sky pixel in our proposed sky representation based on the Igawa sky model. [sent-3, score-3.921]
3 We formulate the problem of solving the sky index of every sky pixel as a labeling problem, where an approximate solution can be efficiently found. [sent-4, score-2.01]
4 Experimental results show that our proposed sky model has better expressiveness, stability with respect to variation in camera parameters, and geo-location estimation in outdoor images compared to the uniform sky index model. [sent-5, score-2.103]
5 Potential applications of our proposed sky model include sky image rendering, where sky images can be generated with an arbitrary cloud distribution at any time and any location, previously impossible with traditional sky models. [sent-6, score-3.858]
6 Ideally, a sky model should consider different weather conditions, the cloud distribution, the scattering of the sunlight, and so on. [sent-11, score-1.129]
7 In this paper, we propose a sky model that incorporates a cloud distribution, which is a step towards this ideal sky model. [sent-13, score-1.99]
8 In recent decades, researchers in atmospheric science and related fields have proposed different sky models to fit the measured luminance or radiance of the sky. [sent-14, score-0.976]
9 One major class of those models classifies the sky into one of several predefined categories from clear to overcast, including the Perez sky model [15] and the CIE standard sky model. [sent-15, score-2.819]
10 The lower image in Figure 1 is its sky index image in our model, where brighter pixels indicate lower cloud density. [sent-18, score-1.189]
11 Those sky models limit the types of appearance of the sky, and the concept of cloudiness in those models is a discrete-level general representation for the overall sky. [sent-20, score-1.001]
12 [11, 12] estimate camera parameters and natural illumination conditions, and geo-locate outdoor images by the sky appearance as well as the detected sun position. [sent-24, score-0.995]
13 [9] also estimate the camera parameters with the luminance of the sky by normalized cross correlation. [sent-26, score-0.989]
14 The reasons are twofold: First, using sky images with clouds requires additional complexity in algorithm design and implementation. [sent-28, score-0.982]
15 Second, existing sky models encourage researchers to use sky images where the clouds are uniform, making clear sky images the easy choice. [sent-29, score-2.862]
16 Sample sky maps generated by the Igawa sky model. [sent-33, score-1.851]
17 Even though some cloud models can produce realistic sky images, most of them do not take the geo-location and timestamp into consideration. [sent-40, score-1.214]
18 In this paper, we propose a sky representation based on the Igawa sky model [7], which is shown to fit the real measured data better than other existing sky models, including models proposed by Perez [15], Brunger [2], Harrison [4], and Kittler [10]. [sent-42, score-2.777]
19 In the Igawa sky model, a sky index is introduced as a model parameter describing the cloudiness of the overall sky. [sent-43, score-2.056]
20 We extend the concept of sky index to every sky pixel location to capture the cloud distribution. [sent-44, score-2.147]
21 The proposed sky representation is demonstrated in Figure 1, where the sky indices are normalized such that the whiter the pixel in the sky index image is, the clearer that sky pixel is. [sent-45, score-3.948]
22 [12] also estimate the clouds and sky turbidity by solving the weight assigned to each pixel, where the weight is not directly linked to any physical model and a data-driven prior model is needed for clear skies. [sent-47, score-1.043]
23 Our model only uses an image and the Igawa sky model to estimate the sky indices which have direct physical interpretation [7]. [sent-48, score-1.934]
24 We make the following contributions: 1: we extend the uniform sky index model to a per-pixel sky index that accurately represents cloud distributions. [sent-52, score-2.32]
25 2: we show applications of our sky index map for sky re-rendering and geo-localization from a single image of the sky. [sent-53, score-1.976]
26 The concept of sky index in the Igawa sky model is only defined globally for the entire sky, not for any particular pixel. [sent-58, score-2.004]
27 In our algorithm, we extend the concept of sky index to every sky pixel to generate one sky index per pixel. [sent-59, score-3.068]
28 In this paper, we use the term “sky maps” for the simulated sky images generated by the Igawa sky model and use them to estimate the camera parameters (zenith, azimuth, and focal length) when solving for sky indices. [sent-60, score-2.802]
29 Figure 2 shows sample sky maps with solar azimuth equal to 90 degrees and solar altitude equal to 30 degrees under various Si. [sent-61, score-1.279]
30 Problem formulation: Our goal is to find the sky indices of all the sky pixels that best reproduce the sky image. [sent-63, score-2.829]
31 Consider a random field of sky indices SI defined over the set of n sky pixels S and a neighborhood system N. [sent-65, score-1.908]
32 Each sky pixel si ∈ S has a random variable SIi ∈ SI, indicating its sky index value. [sent-66, score-1.943]
33 The unary term ψi ensures that the sky index of each sky pixel si is consistent with the observed data I(si) under the Igawa sky model. [sent-75, score-2.929]
34 The binary term ψij promotes sky index smoothness by encouraging neighboring sky pixels to take similar sky indices. [sent-76, score-2.914]
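Read together, the unary and binary terms above imply the standard pairwise labeling energy over the random field SI; a plausible form (the smoothness weight λ is a hypothetical placeholder, since this excerpt does not reproduce the paper's exact equation) is:

    E(SI) = \sum_{s_i \in S} \psi_i(SI_i) + \lambda \sum_{(s_i, s_j) \in N} \psi_{ij}(SI_i, SI_j)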
35 The flowchart of the algorithm solving the sky indices in our proposed model for all the sky pixels. [sent-80, score-1.928]
36 Calculating the sky indices: To solve the sky index for each sky pixel, we propose the algorithm shown in Figure 3. [sent-82, score-2.946]
37 We hypothesize a set of sky images that are used to initialize the sky index, and we perform inference to optimize Eq. [sent-83, score-1.863]
38 Given a geo-located input image with timestamp, we first compute the exact solar zenith θs and solar azimuth φs by [16] and feed the sun orientation and the timestamp into the Igawa sky model to generate a series of sky maps under m levels of sky indices l1, l2, · · · , lm. [sent-85, score-3.203]
39 Second, assuming that the camera parameters are not given as input and that the camera has no roll angle, we need to estimate camera zenith θc, camera azimuth φc, and focal length f while solving the sky indices for all sky pixels. [sent-86, score-2.048]
40 For each hypothesis, we calculate the normalized cross correlation value NCC (si, lj) between the image patch around si and the corresponding patch in the sky map using lj as sky index for every sky pixel si and all possible lj ∈ L. [sent-88, score-3.155]
41 The sky index SIi for each si ∈ S is initialized as the value lj ∈ L that maximizes g(S, SI). [sent-93, score-1.096]
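A minimal sketch of this NCC-based initialization, simplified to a per-pixel argmax of the patchwise NCC values described above; the m hypothesized sky maps are assumed to be already rendered and pixel-aligned with the input image, and patch_around with its radius is a hypothetical helper, not the authors' implementation:

    import numpy as np

    def ncc(a, b):
        """Normalized cross correlation between two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def patch_around(img, y, x, r=5):
        # Hypothetical helper: square patch of radius r around pixel (y, x),
        # clipped at the image border (both inputs clip identically).
        return img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]

    def init_sky_indices(image, sky_maps, sky_pixels, levels):
        """Initialize SIi as the level lj whose rendered sky map best
        matches the patch around each sky pixel si under NCC."""
        si = {}
        for (y, x) in sky_pixels:
            scores = [ncc(patch_around(image, y, x), patch_around(m, y, x))
                      for m in sky_maps]  # one rendered map per level l1..lm
            si[(y, x)] = levels[int(np.argmax(scores))]
        return si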
42 Given θs, φs, θc, φc, f, and SI, we reconstruct the sky image by retrieving the corresponding intensities in sky maps (Figure 4). [sent-96, score-1.863]
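A sketch of that lookup, under the assumption that reconstruction simply reads each pixel's intensity from the sky map rendered at that pixel's solved level (illustrative only, not the authors' code):

    def reconstruct_sky(sky_maps, levels, si):
        """Rebuild the gray-scale sky image: each pixel takes its intensity
        from the sky map rendered at that pixel's sky index level."""
        recon = {}
        for (y, x), label in si.items():
            j = levels.index(label)  # which discrete level this pixel took
            recon[(y, x)] = sky_maps[j][y, x]
        return recon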
43 The last term corresponds to the reconstruction error, where Ir (si) is the normalized intensity of si in the reconstructed sky image generated by the Igawa sky model with current SI, and In (si) is the normalized intensity of si in the input sky image. [sent-99, score-3.051]
44 The flowchart of the algorithm reconstructing the sky image from the corresponding sky index image. [sent-101, score-1.999]
45 The sky index images and the corresponding reconstructed sky images using different numbers of discrete sky index levels m with the same input sky image as that in Figure 1. [sent-119, score-4.03]
46 In each case of m, the top image is the solved sky index image where brighter pixels indicate clearer sky, and the bottom one is the reconstructed sky image where brighter pixels represent higher intensity. [sent-120, score-2.121]
47 The reconstructed sky images are generated by the flow in Figure 4 with the uniform sky index model and our proposed model respectively. [sent-122, score-2.133]
48 Column (d) shows the estimated sky index images solved by the flow in Figure 3 and is used to generate the images in column (c). [sent-123, score-1.119]
49 Column (c) is not simply the negative of column (d) because two pixels with the same sky index may have different intensities in the reconstructed image. [sent-124, score-1.161]
50 Qualitatively, our model captures the cloud distribution better than either the uniform sky index model or Li’s method [13]. [sent-126, score-1.295]
51 In Figure 6, we only show the portion of the sky (our region of interest), and the sky index images are shown such that brighter pixels indicate clearer sky. [sent-127, score-2.056]
52 We reconstruct the sky images only in gray scale because the Igawa sky model only defines the radiance distribution of the sky. [sent-128, score-1.909]
53 The sky index images become finer as m increases, as shown in Figure 5. [sent-130, score-1.065]
54 The sky index images and reconstructed images are normalized to enhance the contrast for display. [sent-132, score-1.044]
55 Limitation of the proposed model: There are some cases in which the sky index images determined by the algorithm of Figure 3 are inconsistent with human perception. [sent-135, score-1.079]
56 This is unfortunate but expected because the Igawa sky model has no volumetric concept of the clouds, and some physical phenomena within the clouds (such as shadows, the scattering of the sunlight, and reflection and refraction) are not fully modeled. [sent-139, score-1.073]
57 If an overcast pixel happens to have an appearance similar to that of a clear pixel and the normalized cross correlation between the neighborhood of the overcast pixel and the clear sky map is high, the overcast pixel can be incorrectly labeled as a clear sky pixel. [sent-140, score-2.325]
58 We call these 198 images the target data set, and our goal in the following experiments is to show that our model outperforms the uniform sky index model on this set. [sent-145, score-1.201]
59 Expressiveness: In this experiment, we compare the expressiveness of our proposed model with that of the traditional uniform sky index model, where all the sky pixels take the same label. [sent-150, score-2.115]
60 $\mathrm{ANRE} = \frac{1}{n}\sum_{s_i \in S} |I_r(s_i) - I_n(s_i)|$, (7) where Ir(si) is the normalized intensity of si in the reconstructed sky image generated by the Igawa sky model with SI, and In(si) is the normalized intensity of si in the input sky image. [sent-153, score-3.051]
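In NumPy terms, and assuming the mean-absolute-error form of eq. (7) reconstructed above together with a simple min-max intensity normalization (the paper's exact normalization is not given in this excerpt), ANRE reduces to:

    import numpy as np

    def anre(recon, inp, sky_mask):
        """Average normalized reconstruction error over the sky pixels."""
        r = recon[sky_mask].astype(np.float64)
        i = inp[sky_mask].astype(np.float64)
        r = (r - r.min()) / (np.ptp(r) + 1e-12)  # assumed min-max rescale
        i = (i - i.min()) / (np.ptp(i) + 1e-12)
        return float(np.abs(r - i).mean())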
61 Some reconstructed images of both the uniform sky index model and our model with the target data set are shown in Figure 6. [sent-155, score-1.227]
62 As expected, the appearance of the reconstructed image of our model is more similar to the input image than that of the uniform sky index model. [sent-156, score-1.189]
63 Table 1 shows that the average ANRE of the target data set with the proposed model is lower than that of the uniform sky index model. [sent-157, score-1.154]
64 5% confidence, the ANRE of our model is lower than that of the uniform sky index model. [sent-162, score-1.131]
65 The reconstructed images using the uniform sky index model and our proposed model (one example per row). [sent-164, score-1.204]
66 Columns (b) and (c) are the reconstructed sky images (brighter pixels indicate higher sky intensity) with the uniform sky index model and our model respectively. [sent-166, score-3.063]
67 Column (d) shows the sky index images of our model, where clearer sky pixels are brighter. [sent-167, score-2.023]
68 Stability of sky index: We measure the change of the sky index image when the camera parameters (θc, φc, f) estimated by our algorithm are perturbed. [sent-178, score-2.119]
69 The average ANRE of the target data set with both the uniform sky index model and our proposed model. [sent-186, score-1.154]
70 The change D of two sky index images is measured by the following function: $D = \frac{1}{n}\sum_{i=1}^{n} |SI_i^{1} - SI_i^{0}|$, (8) [sent-189, score-1.074]
71 where SIi1 and SIi0 are the sky indices of sky pixel si with the perturbed and original camera parameters respectively. [sent-190, score-2.007]
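Eq. (8) is just a per-pixel mean absolute difference between the two sky index maps; as a one-function sketch:

    import numpy as np

    def sky_index_change(si_perturbed, si_original):
        """Eq. (8): average absolute per-pixel change of the sky index."""
        a = np.asarray(si_perturbed, dtype=np.float64)
        b = np.asarray(si_original, dtype=np.float64)
        return float(np.abs(a - b).mean())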
72 Figure 10 shows the average D of the target data set under various Δθc and Δφc using the uniform sky index model (left half) and our model (right half). [sent-191, score-1.168]
73 The sky index image of our model is more stable than that of the uniform sky index model under the perturbation of θc and φc. [sent-192, score-2.2]
74 The results are expected because when Δθc and Δφc reach certain amounts, the uniform sky index model will force all pixels to take another sky index, but our model will only change the sky index of a pixel si if the lj maximizing NCC (si, lj) changes. [sent-193, score-3.275]
75 Geo-location estimation: To compare the ability of the uniform sky index model and our model to predict longitude and latitude, we fix θc, φc, and f at the estimated values computed in Figure 3 but hypothesize pairs of longitude and latitude (on a 5 degree grid). [sent-196, score-1.242]
76 The flow of estimating the geo-location is shown in Figure 11 and executed with both the uniform sky index model and our model. [sent-199, score-1.139]
77 The average change of sky index under different camera zenith and azimuth perturbations on the target data set. [sent-201, score-1.19]
78 The average change of our model (right half) is generally smoother than that of the uniform sky index model (left half). [sent-202, score-1.154]
79 We search for the geo-location that maximizes the average normalized cross correlation values between the sky maps and the input image at the corresponding location. [sent-204, score-0.978]
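A sketch of this grid search, assuming a hypothetical render_sky_maps(lat, lon, timestamp) wrapper around the Igawa model that returns one sky map per index level; how the per-map NCC values are aggregated is not fully spelled out in the excerpt, so this sketch averages them as the sentence above suggests:

    import numpy as np

    def global_ncc(a, b):
        """Normalized cross correlation between two flattened arrays."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def estimate_geolocation(image, sky_mask, timestamp, render_sky_maps,
                             step=5):
        """Grid-search latitude/longitude on a `step`-degree grid, scoring
        each hypothesis by the mean NCC between the input sky region and
        the sky maps rendered for that location."""
        best, best_score = None, -np.inf
        for lat in range(-90, 91, step):
            for lon in range(-180, 181, step):
                maps = render_sky_maps(lat, lon, timestamp)
                score = np.mean([global_ncc(image[sky_mask], m[sky_mask])
                                 for m in maps])
                if score > best_score:
                    best, best_score = (lat, lon), score
        return best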
80 Accurate geo-location estimation often requires clear sky images [9, 12], shadow detection [21], or an image sequence where the sun is visible [12]. [sent-205, score-0.988]
81 The uniform sky index model only achieves the same criteria in 0. [sent-209, score-1.131]
82 Further, our model predicts more accurate geo-location than the uniform sky index model in 66. [sent-213, score-1.145]
83 The average surface errors made by our model and the uniform sky index model are 4898 km and 7209 km respectively. [sent-222, score-1.194]
84 With 99% confidence, our model predicts more accurate geo-location than the uniform sky index model does. [sent-224, score-1.145]
85 Given the sky indices of all the sky pixels, we can render the corresponding sky images at any time and location by the flow in Figure 4. [sent-227, score-2.83]
86 In other words, we can bring our favorite cloud distribution to any desired location and time, which is impossible with the uniform sky index model. [sent-228, score-1.267]
87 The sky indices derived from our model may serve as a source of features for cloud classification (e.g., cirrus, cumulus). [sent-231, score-1.118]
88 In Figure 14, each number is the average sky index, ranging from 0 (overcast) to 1 (clear), of the corresponding image. [sent-233, score-1.055]
89 [18] estimate cloudiness of the sky and other semantic attributes to categorize sky images, and we believe that it can achieve detailed classification by cloud types using our model. [sent-237, score-2.042]
90 Figure 14 orders the sky images based on the cloud cover by sorting the average sky indices. [sent-238, score-1.986]
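As a sketch, this ordering is just a sort by mean sky index (assuming each entry pairs an image with its solved per-pixel index map):

    def order_by_cloud_cover(entries):
        """Sort (image, sky_index_map) pairs from most overcast (low mean
        sky index) to clearest (high mean sky index)."""
        def mean_index(index_map):
            vals = list(index_map.values())
            return sum(vals) / len(vals)
        return sorted(entries, key=lambda e: mean_index(e[1]))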
91 The sky indices of our model are useful for cloud matching (deciding if two clouds are the same) or cloud tracking, a pre-processing step for some tasks in solar engineering [14]. [sent-239, score-1.415]
92 Note that in [11], the sky is classified into one of three categories: clear, partially cloudy, or completely overcast, according to the general sky appearance. [sent-241, score-1.933]
93 Reconstructed sky images with the same cloud distribution under various times and locations. [sent-244, score-1.081]
94 Conclusion: In this paper, we propose a novel sky representation that includes a pixel-wise sky index to represent clouds. [sent-247, score-1.976]
95 We formulate our model as a labeling problem and solve the sky index for each sky pixel. [sent-248, score-1.08]
96 In our experiment, the proposed sky model surpasses the uniform sky index model in three ways: expressiveness, stability under inaccurate camera parameter estimation, and geo-locating ability. [sent-249, score-2.081]
97 We also demonstrate using the sky index image to produce sky images with a given cloud distribution at a desired time and location. [sent-250, score-2.136]
98 In the future, we will incorporate color information and model physical phenomena such as refraction and sunlight scattering to improve the reconstructed sky images. [sent-251, score-1.099]
99 Directional sky luminance versus cloud cover and solar position. [sent-274, score-1.195]
100 Models of sky radiance distribution and sky luminance distribution. [sent-294, score-1.913]
wordName wordTfidf (topN-words)
[('sky', 0.921), ('igawa', 0.15), ('cloud', 0.134), ('index', 0.134), ('solar', 0.112), ('overcast', 0.091), ('si', 0.078), ('cloudiness', 0.066), ('uniform', 0.062), ('anre', 0.058), ('amos', 0.056), ('clouds', 0.051), ('sii', 0.05), ('reconstructed', 0.049), ('indices', 0.049), ('scattering', 0.048), ('azimuth', 0.046), ('zenith', 0.042), ('lalonde', 0.033), ('altitude', 0.033), ('expressiveness', 0.032), ('luminance', 0.028), ('clear', 0.028), ('perez', 0.028), ('sunlight', 0.028), ('lj', 0.027), ('radiance', 0.027), ('ncc', 0.027), ('outdoor', 0.026), ('timestamp', 0.025), ('latitude', 0.025), ('longitude', 0.025), ('pixel', 0.023), ('flowchart', 0.023), ('target', 0.023), ('thin', 0.023), ('brighter', 0.021), ('rendering', 0.021), ('kittler', 0.021), ('clearer', 0.02), ('chromatic', 0.02), ('km', 0.02), ('brunger', 0.019), ('climate', 0.019), ('schpok', 0.019), ('proceedings', 0.018), ('column', 0.018), ('pixels', 0.017), ('hufnagel', 0.017), ('forecasting', 0.017), ('lez', 0.017), ('sj', 0.016), ('shadow', 0.016), ('distribution', 0.016), ('normalized', 0.015), ('physical', 0.015), ('degrees', 0.015), ('camera', 0.015), ('intensity', 0.015), ('mae', 0.014), ('refraction', 0.014), ('maximizes', 0.014), ('ic', 0.014), ('jacobs', 0.014), ('ir', 0.014), ('concept', 0.014), ('model', 0.014), ('arizona', 0.014), ('sun', 0.013), ('ij', 0.013), ('wacv', 0.013), ('energy', 0.013), ('portion', 0.012), ('intensities', 0.012), ('weather', 0.012), ('labeling', 0.011), ('hypothesize', 0.011), ('animation', 0.011), ('constants', 0.011), ('illumination', 0.01), ('falling', 0.01), ('half', 0.01), ('images', 0.01), ('conditions', 0.01), ('cross', 0.01), ('rectangles', 0.01), ('phenomena', 0.01), ('unary', 0.009), ('paired', 0.009), ('input', 0.009), ('surface', 0.009), ('incorrectly', 0.009), ('maps', 0.009), ('change', 0.009), ('equals', 0.008), ('flow', 0.008), ('tao', 0.008), ('ofl', 0.008), ('otuher', 0.008), ('bred', 0.008)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000008 215 iccv-2013-Incorporating Cloud Distribution in Sky Representation
Author: Kuan-Chuan Peng, Tsuhan Chen
Abstract: Most sky models only describe the cloudiness of the overall sky by a single category or parameter such as sky index, which does not account for the distribution of the clouds across the sky. To capture variable cloudiness, we extend the concept of sky index to a random field indicating the level of cloudiness of each sky pixel in our proposed sky representation based on the Igawa sky model. We formulate the problem of solving the sky index of every sky pixel as a labeling problem, where an approximate solution can be efficiently found. Experimental results show that our proposed sky model has better expressiveness, stability with respect to variation in camera parameters, and geo-location estimation in outdoor images compared to the uniform sky index model. Potential applications of our proposed sky model include sky image rendering, where sky images can be generated with an arbitrary cloud distribution at any time and any location, previously impossible with traditional sky models.
2 0.12212438 290 iccv-2013-New Graph Structured Sparsity Model for Multi-label Image Annotations
Author: Xiao Cai, Feiping Nie, Weidong Cai, Heng Huang
Abstract: In multi-label image annotations, because each image is associated with multiple categories, the semantic terms (label classes) are not mutually exclusive. Previous research showed that such label correlations can largely boost the annotation accuracy. However, all existing methods only directly apply the label correlation matrix to enhance the label inference and assignment without further learning the structural information among classes. In this paper, we model the label correlations using the relational graph, and propose a novel graph structured sparse learning model to incorporate the topological constraints of the relation graph in multi-label classifications. As a result, our new method will capture and utilize the hidden class structures in the relational graph to improve the annotation results. In the proposed objective, a large number of structured sparsity-inducing norms are utilized, thus the optimization becomes difficult. To solve this problem, we derive an efficient optimization algorithm with proved convergence. We perform extensive experiments on six multi-label image annotation benchmark data sets. In all empirical results, our new method shows better annotation results than the state-of-the-art approaches.
3 0.10716159 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling
Author: Evgeny Levinkov, Mario Fritz
Abstract: Semantic road labeling is a key component of systems that aim at assisted or even autonomous driving. Considering that such systems continuously operate in the realworld, unforeseen conditions not represented in any conceivable training procedure are likely to occur on a regular basis. In order to equip systems with the ability to cope with such situations, we would like to enable adaptation to such new situations and conditions at runtime. Existing adaptive methods for image labeling either require labeled data from the new condition or even operate globally on a complete test set. None of this is a desirable mode of operation for a system as described above where new images arrive sequentially and conditions may vary. We study the effect of changing test conditions on scene labeling methods based on a new diverse street scene dataset. We propose a novel approach that can operate in such conditions and is based on a sequential Bayesian model update in order to robustly integrate the arriving images into the adapting procedure.
4 0.056940425 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects
Author: Benjamin Ummenhofer, Thomas Brox
Abstract: 3D reconstruction deals with the problem of finding the shape of an object from a set of images. Thin objects that have virtually no volume pose a special challenge for reconstruction with respect to shape representation and fusion of depth information. In this paper we present a dense point-based reconstruction method that can deal with this special class of objects. We seek to jointly optimize a set of depth maps by treating each pixel as a point in space. Points are pulled towards a common surface by pairwise forces in an iterative scheme. The method also handles the problem of opposed surfaces by means of penalty forces. Efficient optimization is achieved by grouping points to superpixels and a spatial hashing approach for fast neighborhood queries. We show that the approach is on a par with state-of-the-art methods for standard multi view stereo settings and gives superior results for thin objects.
5 0.049302109 135 iccv-2013-Efficient Image Dehazing with Boundary Constraint and Contextual Regularization
Author: Gaofeng Meng, Ying Wang, Jiangyong Duan, Shiming Xiang, Chunhong Pan
Abstract: unkown-abstract
6 0.043189172 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint
7 0.041138891 387 iccv-2013-Shape Anchors for Data-Driven Multi-view Reconstruction
8 0.033890642 424 iccv-2013-Tracking Revisited Using RGBD Camera: Unified Benchmark and Baselines
9 0.03343812 56 iccv-2013-Automatic Registration of RGB-D Scans via Salient Directions
10 0.032409374 34 iccv-2013-Abnormal Event Detection at 150 FPS in MATLAB
11 0.02978513 111 iccv-2013-Detecting Dynamic Objects with Multi-view Background Subtraction
12 0.028944565 59 iccv-2013-Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation
13 0.026185125 405 iccv-2013-Structured Light in Sunlight
14 0.025829097 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis
15 0.025007835 366 iccv-2013-STAR3D: Simultaneous Tracking and Reconstruction of 3D Objects Using RGB-D Data
16 0.02491327 72 iccv-2013-Characterizing Layouts of Outdoor Scenes Using Spatial Topic Processes
17 0.024370806 2 iccv-2013-3D Scene Understanding by Voxel-CRF
18 0.021700278 416 iccv-2013-The Interestingness of Images
19 0.021511702 270 iccv-2013-Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking
20 0.02119099 286 iccv-2013-NYC3DCars: A Dataset of 3D Vehicles in Geographic Context
topicId topicWeight
[(0, 0.055), (1, -0.028), (2, -0.012), (3, -0.008), (4, 0.005), (5, 0.011), (6, -0.008), (7, -0.015), (8, -0.008), (9, -0.019), (10, -0.003), (11, -0.006), (12, 0.002), (13, 0.02), (14, 0.009), (15, -0.031), (16, -0.006), (17, -0.014), (18, -0.016), (19, -0.016), (20, -0.003), (21, 0.004), (22, 0.011), (23, -0.017), (24, -0.007), (25, -0.003), (26, 0.02), (27, -0.023), (28, 0.055), (29, 0.028), (30, 0.026), (31, -0.005), (32, 0.012), (33, -0.011), (34, -0.026), (35, 0.017), (36, -0.012), (37, 0.045), (38, -0.016), (39, 0.038), (40, -0.04), (41, -0.041), (42, -0.011), (43, -0.021), (44, -0.039), (45, -0.038), (46, -0.002), (47, 0.003), (48, -0.055), (49, -0.013)]
simIndex simValue paperId paperTitle
same-paper 1 0.91820627 215 iccv-2013-Incorporating Cloud Distribution in Sky Representation
Author: Kuan-Chuan Peng, Tsuhan Chen
Abstract: Most sky models only describe the cloudiness of the overall sky by a single category or parameter such as sky index, which does not account for the distribution of the clouds across the sky. To capture variable cloudiness, we extend the concept of sky index to a random field indicating the level of cloudiness of each sky pixel in our proposed sky representation based on the Igawa sky model. We formulate the problem of solving the sky index of every sky pixel as a labeling problem, where an approximate solution can be efficiently found. Experimental results show that our proposed sky model has better expressiveness, stability with respect to variation in camera parameters, and geo-location estimation in outdoor images compared to the uniform sky index model. Potential applications of our proposed sky model include sky image rendering, where sky images can be generated with an arbitrary cloud distribution at any time and any location, previously impossible with traditional sky models.
2 0.54461837 2 iccv-2013-3D Scene Understanding by Voxel-CRF
Author: Byung-Soo Kim, Pushmeet Kohli, Silvio Savarese
Abstract: Scene understanding is an important yet very challenging problem in computer vision. In the past few years, researchers have taken advantage of the recent diffusion of depth-RGB (RGB-D) cameras to help simplify the problem of inferring scene semantics. However, while the added 3D geometry is certainly useful to segment out objects with different depth values, it also adds complications in that the 3D geometry is often incorrect because of noisy depth measurements and the actual 3D extent of the objects is usually unknown because of occlusions. In this paper we propose a new method that allows us to jointly refine the 3D reconstruction of the scene (raw depth values) while accurately segmenting out the objects or scene elements from the 3D reconstruction. This is achieved by introducing a new model which we called Voxel-CRF. The Voxel-CRF model is based on the idea of constructing a conditional random field over a 3D volume of interest which captures the semantic and 3D geometric relationships among different elements (voxels) of the scene. Such a model allows us to jointly estimate (1) a dense voxel-based 3D reconstruction and (2) the semantic labels associated with each voxel even in the presence of partial occlusions using an approximate yet efficient inference strategy. We evaluated our method on the challenging NYU Depth dataset (Versions 1 and 2). Experimental results show that our method achieves competitive accuracy in inferring scene semantics and visually appealing results in improving the quality of the 3D reconstruction. We also demonstrate an interesting application of object removal and scene completion from RGB-D images.
3 0.49344862 135 iccv-2013-Efficient Image Dehazing with Boundary Constraint and Contextual Regularization
Author: Gaofeng Meng, Ying Wang, Jiangyong Duan, Shiming Xiang, Chunhong Pan
Abstract: unkown-abstract
4 0.46884045 386 iccv-2013-Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling
Author: Evgeny Levinkov, Mario Fritz
Abstract: Semantic road labeling is a key component of systems that aim at assisted or even autonomous driving. Considering that such systems continuously operate in the realworld, unforeseen conditions not represented in any conceivable training procedure are likely to occur on a regular basis. In order to equip systems with the ability to cope with such situations, we would like to enable adaptation to such new situations and conditions at runtime. Existing adaptive methods for image labeling either require labeled data from the new condition or even operate globally on a complete test set. None of this is a desirable mode of operation for a system as described above where new images arrive sequentially and conditions may vary. We study the effect of changing test conditions on scene labeling methods based on a new diverse street scene dataset. We propose a novel approach that can operate in such conditions and is based on a sequential Bayesian model update in order to robustly integrate the arriving images into the adapting procedure.
5 0.46715781 447 iccv-2013-Volumetric Semantic Segmentation Using Pyramid Context Features
Author: Jonathan T. Barron, Mark D. Biggin, Pablo Arbeláez, David W. Knowles, Soile V.E. Keranen, Jitendra Malik
Abstract: We present an algorithm for the per-voxel semantic segmentation of a three-dimensional volume. At the core of our algorithm is a novel “pyramid context” feature, a descriptive representation designed such that exact per-voxel linear classification can be made extremely efficient. This feature not only allows for efficient semantic segmentation but enables other aspects of our algorithm, such as novel learned features and a stacked architecture that can reason about self-consistency. We demonstrate our technique on 3D fluorescence microscopy data of Drosophila embryos for which we are able to produce extremely accurate semantic segmentations in a matter of minutes, and for which other algorithms fail due to the size and high-dimensionality of the data, or due to the difficulty of the task.
6 0.46133471 9 iccv-2013-A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera
7 0.44308108 228 iccv-2013-Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences
8 0.44183171 413 iccv-2013-Target-Driven Moire Pattern Synthesis by Phase Modulation
9 0.43762612 72 iccv-2013-Characterizing Layouts of Outdoor Scenes Using Spatial Topic Processes
10 0.43655062 331 iccv-2013-Pyramid Coding for Functional Scene Element Recognition in Video Scenes
11 0.43583193 407 iccv-2013-Subpixel Scanning Invariant to Indirect Lighting Using Quadratic Code Length
12 0.42719001 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis
13 0.42179522 433 iccv-2013-Understanding High-Level Semantics by Modeling Traffic Patterns
14 0.41947481 290 iccv-2013-New Graph Structured Sparsity Model for Multi-label Image Annotations
15 0.41533563 367 iccv-2013-SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels
16 0.41493928 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects
17 0.41206256 366 iccv-2013-STAR3D: Simultaneous Tracking and Reconstruction of 3D Objects Using RGB-D Data
18 0.40980157 420 iccv-2013-Topology-Constrained Layered Tracking with Latent Flow
19 0.40159962 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues
20 0.3992058 73 iccv-2013-Class-Specific Simplex-Latent Dirichlet Allocation for Image Classification
topicId topicWeight
[(2, 0.056), (26, 0.071), (31, 0.024), (42, 0.063), (48, 0.016), (64, 0.426), (73, 0.029), (89, 0.129)]
simIndex simValue paperId paperTitle
1 0.93818736 99 iccv-2013-Cross-View Action Recognition over Heterogeneous Feature Spaces
Author: Xinxiao Wu, Han Wang, Cuiwei Liu, Yunde Jia
Abstract: In cross-view action recognition, “what you saw” in one view is different from “what you recognize” in another view. The data distribution and even the feature space can change from one view to another because the appearance and motion of actions drastically vary across different views. In this paper, we address the problem of transferring action models learned in one view (source view) to another different view (target view), where action instances from these two views are represented by heterogeneous features. A novel learning method, called Heterogeneous Transfer Discriminant-analysis of Canonical Correlations (HTDCC), is proposed to learn a discriminative common feature space for linking source and target views to transfer knowledge between them. Two projection matrices that respectively map data from source and target views into the common space are optimized via simultaneously minimizing the canonical correlations of inter-class samples and maximizing the intra-class canonical correlations. Our model is neither restricted to corresponding action instances in the two views nor restricted to the same type of feature, and can handle only a few or even no labeled samples available in the target view. To reduce the data distribution mismatch between the source and target views in the common feature space, a nonparametric criterion is included in the objective function. We additionally propose a joint weight learning method to fuse multiple source-view action classifiers for recognition in the target view. Different combination weights are assigned to different source views, with each weight representing how contributive the corresponding source view is to the target view. The proposed method is evaluated on the IXMAS multi-view dataset and achieves promising results.
2 0.91605753 298 iccv-2013-Online Robust Non-negative Dictionary Learning for Visual Tracking
Author: Naiyan Wang, Jingdong Wang, Dit-Yan Yeung
Abstract: This paper studies the visual tracking problem in video sequences and presents a novel robust sparse tracker under the particle filter framework. In particular, we propose an online robust non-negative dictionary learning algorithm for updating the object templates so that each learned template can capture a distinctive aspect of the tracked object. Another appealing property of this approach is that it can automatically detect and reject the occlusion and cluttered background in a principled way. In addition, we propose a new particle representation formulation using the Huber loss function. The advantage is that it can yield robust estimation without using trivial templates adopted by previous sparse trackers, leading to faster computation. We also reveal the equivalence between this new formulation and the previous one which uses trivial templates. The proposed tracker is empirically compared with state-of-the-art trackers on some challenging video sequences. Both quantitative and qualitative comparisons show that our proposed tracker is superior and more stable.
3 0.90122014 88 iccv-2013-Constant Time Weighted Median Filtering for Stereo Matching and Beyond
Author: Ziyang Ma, Kaiming He, Yichen Wei, Jian Sun, Enhua Wu
Abstract: Despite the continuous advances in local stereo matching for years, most efforts are on developing robust cost computation and aggregation methods. Little attention has been seriously paid to the disparity refinement. In this work, we study weighted median filtering for disparity refinement. We discover that with this refinement, even the simple box filter aggregation achieves comparable accuracy with various sophisticated aggregation methods (with the same refinement). This is due to the nice weighted median filtering properties of removing outlier error while respecting edges/structures. This reveals that the previously overlooked refinement can be at least as crucial as aggregation. We also develop the first constant time algorithmfor the previously time-consuming weighted median filter. This makes the simple combination “box aggregation + weighted median ” an attractive solution in practice for both speed and accuracy. As a byproduct, the fast weighted median filtering unleashes its potential in other applications that were hampered by high complexities. We show its superiority in various applications such as depth upsampling, clip-art JPEG artifact removal, and image stylization.
4 0.88554996 242 iccv-2013-Learning People Detectors for Tracking in Crowded Scenes
Author: Siyu Tang, Mykhaylo Andriluka, Anton Milan, Konrad Schindler, Stefan Roth, Bernt Schiele
Abstract: People tracking in crowded real-world scenes is challenging due to frequent and long-term occlusions. Recent tracking methods obtain the image evidence from object (people) detectors, but typically use off-the-shelf detectors and treat them as black box components. In this paper we argue that for best performance one should explicitly train people detectors on failure cases of the overall tracker instead. To that end, we first propose a novel joint people detector that combines a state-of-the-art single person detector with a detector for pairs of people, which explicitly exploits common patterns of person-person occlusions across multiple viewpoints that are a frequent failure case for tracking in crowded scenes. To explicitly address remaining failure modes of the tracker we explore two methods. First, we analyze typical failures of trackers and train a detector explicitly on these cases. And second, we train the detector with the people tracker in the loop, focusing on the most common tracker failures. We show that our joint multi-person detector significantly improves both detection accuracy as well as tracker performance, improving the state-of-the-art on standard benchmarks.
5 0.85511601 166 iccv-2013-Finding Actors and Actions in Movies
Author: P. Bojanowski, F. Bach, I. Laptev, J. Ponce, C. Schmid, J. Sivic
Abstract: We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor/action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor/action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.
6 0.84881181 37 iccv-2013-Action Recognition and Localization by Hierarchical Space-Time Segments
7 0.82300073 441 iccv-2013-Video Motion for Every Visible Point
same-paper 8 0.80669117 215 iccv-2013-Incorporating Cloud Distribution in Sky Representation
9 0.75787497 380 iccv-2013-Semantic Transform: Weakly Supervised Semantic Inference for Relating Visual Attributes
10 0.74604517 303 iccv-2013-Orderless Tracking through Model-Averaged Posterior Estimation
11 0.71656477 442 iccv-2013-Video Segmentation by Tracking Many Figure-Ground Segments
12 0.6993798 86 iccv-2013-Concurrent Action Detection with Structural Prediction
13 0.686185 240 iccv-2013-Learning Maximum Margin Temporal Warping for Action Recognition
14 0.68411362 424 iccv-2013-Tracking Revisited Using RGBD Camera: Unified Benchmark and Baselines
15 0.66371882 359 iccv-2013-Robust Object Tracking with Online Multi-lifespan Dictionary Learning
16 0.65779054 425 iccv-2013-Tracking via Robust Multi-task Multi-view Joint Sparse Representation
17 0.64284652 417 iccv-2013-The Moving Pose: An Efficient 3D Kinematics Descriptor for Low-Latency Action Recognition and Detection
18 0.61070597 338 iccv-2013-Randomized Ensemble Tracking
19 0.6089552 320 iccv-2013-Pose-Configurable Generic Tracking of Elongated Objects
20 0.6060428 22 iccv-2013-A New Adaptive Segmental Matching Measure for Human Activity Recognition