cvpr cvpr2013 cvpr2013-84 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Nathan Jacobs, Mohammad T. Islam, Scott Workman
Abstract: We propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length and geo-orientation. Cloud-based cues provide strong constraints and are an important alternative to methods that require specific forms of static scene geometry or clear sky conditions. Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. We show results on real scenes that highlight the effectiveness of our proposed methods.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. [sent-4, score-0.725]
2 This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length and geo-orientation. [sent-5, score-0.516]
3 Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. [sent-7, score-0.343]
4 To maximize the value of webcams for such applications, we need to know the location, orientation, focal length and, more generally, the calibration of each camera. [sent-13, score-0.377]
5 This is challenging because often we only have time-stamped imagery from the camera and therefore traditional calibration approaches that rely on calibration targets or multiple views are not appropriate. [sent-14, score-0.478]
6 Each of these methods makes assumptions about the scene and then solves for the camera calibration parameters that best fit the observed image data. [sent-16, score-0.349]
7 We propose cloud motion as a new cue for static camera calibration. [sent-18, score-0.53]
8 We assume cloud motion is generally a horizontal translation and show how to use video of moving [Figure 1: We propose to use natural cues provided by cloud motion to calibrate static outdoor cameras. Input: outdoor video(s) and known wind velocity.] [sent-19, score-1.263]
9 clouds to estimate the radial distortion, horizon line, focal length and geo-orientation of the camera (see Fig. 1). [sent-21, score-0.722]
10 The cloud motion cue is suitable for scenes in which a substantial amount of sky is visible, but does not require any particular static scene structure or direct camera access. [sent-23, score-0.682]
11 We view the cloud motion cue as complementary to previous work that explored other geometric calibration cues. [sent-24, score-0.541]
12 We aggregate motion statistics separately for sky regions in each video (Sec. [sent-28, score-0.261]
13 If radial distortion estimation is necessary, we estimate per-pixel flow vectors, fit streamlines, and use existing line-based techniques to estimate distortion parameters (Sec. [sent-30, score-0.627]
14 For each video, we estimate the vanishing point of the cloud motion in a collection of videos captured on different days (Sec. [sent-33, score-1.288]
15 We then combine these individual wind estimates in various ways to estimate camera calibration parameters. [sent-36, score-0.648]
16 Given multiple days of video, with different wind directions, we can estimate the horizon line (Sec. [sent-37, score-0.811]
17 When the camera location and time-stamp are known for each video, we use publicly available wind velocity data to estimate the focal length of the camera and the pan, tilt and roll of the camera in geographic coordinates (Sec. [sent-40, score-0.85]
18 In this section, we provide background on how we aggregate motion information and describe our methods for estimating the radial distortion and the vanishing point of the cloud motion. [sent-47, score-1.401]
19 The six numbers in Ap and bp summarize the motion at each pixel; they are very fast to compute and form the foundation for all of our calibration methods. [sent-63, score-0.335]
20 To estimate the average per-pixel optical flow for a single video sequence we can use the equation defined above; we use this for estimating radial distortion and visualization purposes. [sent-64, score-0.48]
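A minimal sketch of this aggregation step, assuming Lucas–Kanade-style statistics (Ap built from spatial gradients, bp from spatio-temporal products); the exact weighting and windowing used by the authors are not specified here, so treat the details as illustrative:

```python
import numpy as np

def aggregate_motion_statistics(frames):
    """Accumulate per-pixel motion statistics over a grayscale video.
    For each pixel p this builds Ap (2x2) and bp (2x1) from image derivatives,
    i.e. the six numbers referred to in the text."""
    A, b = None, None
    for t in range(len(frames) - 1):
        I0 = frames[t].astype(float)
        I1 = frames[t + 1].astype(float)
        Iy, Ix = np.gradient(I0)          # spatial derivatives
        It = I1 - I0                      # temporal derivative
        if A is None:
            A = np.zeros(I0.shape + (2, 2))
            b = np.zeros(I0.shape + (2,))
        A[..., 0, 0] += Ix * Ix
        A[..., 0, 1] += Ix * Iy
        A[..., 1, 0] += Ix * Iy
        A[..., 1, 1] += Iy * Iy
        b[..., 0] += Ix * It
        b[..., 1] += Iy * It
    return A, b

def average_flow(A, b, eps=1e-6):
    """Average per-pixel optical flow u = -A^{-1} b (regularized), as used for
    radial distortion estimation and visualization."""
    A_reg = A + eps * np.eye(2)
    return -np.linalg.solve(A_reg, b[..., None])[..., 0]
```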
21 Radial Distortion Estimation We propose to estimate radial distortion of an outdoor camera using our assumption of translational cloud motion. [sent-67, score-0.826]
22 We then use an offthe-shelf technique to estimate streamlines [17], which are curves that are tangent to the flow. [sent-69, score-0.217]
23 If our assumption on translational motion holds, this results in a set of image-space curves that are the projection of straight lines in 3D (Fig. [sent-70, score-0.261]
24 We use these in a standard line-based approach for estimating radial distortion [18]. [sent-72, score-0.312]
25 In image regions that violate our motion assumptions, the advection process will often estimate streamlines that are not the projection of 3D lines. [sent-73, score-0.36]
26 We assume that the radial distortion is fairly mild; our score is the inverse of the sum-of-squared differences from a quadratic approximation of the streamline. [sent-75, score-0.281]
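A small sketch of this scoring step, assuming each streamline is sampled as (x, y) points and is roughly parameterizable by x (near-vertical streamlines would need a rotated parameterization):

```python
import numpy as np

def streamline_score(points):
    """Score a streamline by how well a quadratic curve approximates it;
    following the text, the score is the inverse of the sum-of-squared
    differences from the quadratic fit."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(x, y, deg=2)              # quadratic approximation
    residual = y - np.polyval(coeffs, x)
    return 1.0 / (np.sum(residual ** 2) + 1e-12)  # higher = smoother streamline
```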
27 This approach only requires a single video, but we can combine streamlines from multiple days and use this method to automatically select the best streamlines. [sent-77, score-0.26]
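A plumb-line-style sketch of the distortion fit itself, using a one-parameter division model as a stand-in for the line-based technique of [18] (whose exact distortion model is not given in this summary); the highest-scoring streamlines selected above would be passed in:

```python
import numpy as np

def undistort_division(points, lam, center):
    """One-parameter division model: p_u = c + (p_d - c) / (1 + lam * r^2)."""
    d = points - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)

def straightness_cost(lam, streamlines, center):
    """Sum of squared residuals of total-least-squares line fits to the
    undistorted streamlines (zero if they are perfectly straight)."""
    cost = 0.0
    for s in streamlines:
        u = undistort_division(np.asarray(s, dtype=float), lam, center)
        u = u - u.mean(axis=0)
        _, sv, _ = np.linalg.svd(u, full_matrices=False)
        cost += sv[-1] ** 2                      # energy off the best-fit line
    return cost

def estimate_distortion(streamlines, center, lams=np.linspace(-1e-6, 1e-6, 201)):
    """Pick the distortion parameter that makes the streamlines straightest."""
    costs = [straightness_cost(l, streamlines, center) for l in lams]
    return lams[int(np.argmin(costs))]
```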
28 Estimating the Cloud Motion Vanishing Point Our remaining calibration methods require an estimate of the vanishing point of the cloud motion. [sent-83, score-1.168]
29 We found that directly computing optical flow from the structure tensor, as we did in the previous section, resulted in noisy vanishing point estimates, despite attempting numerous approaches to robust estimation. [sent-84, score-0.791]
30 We further constrain the motion at each pixel, p, to be in the direction of the vanishing point such that [xp, yp]T = αp(v − p). [sent-87, score-0.837]
31 This results in the following objective function, which depends on the vanishing point location, v, and the scaled, local motion magnitudes, {αp}: f(v, {αp}) = ... [sent-88, score-0.858]
32 To estimate the vanishing point for a scene, we minimize (1) using a non-linear minimization approach similar to [4], with the key difference being that the camera calibration is unknown. [sent-93, score-1.06]
33 (b) The highest ranking streamlines across all videos with color representing the video of origin. [sent-98, score-0.293]
34 The magnitudes, {αp}, are replaced with the optimal values at the corresponding v, subject to the constraint that they result in flows that all point toward or away from the vanishing point. [sent-99, score-0.758]
35 For example, if the vanishing point is a sink, which means the wind is blowing away from the camera, we set αp = max(αp, 0). [sent-100, score-1.008]
36 We then update the vanishing point location, v, using the gradient of (1): ∂f/∂v = ... [sent-101, score-0.726]
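A sketch of this estimation step. The per-pixel error is assumed to be the quadratic form implied by the aggregated statistics (u^T Ap u + 2 bp^T u + cp), and a generic numerical optimizer stands in for the analytic gradient update of (1):

```python
import numpy as np
from scipy.optimize import minimize

def constrained_motion_error(v, pts, A, b, c, sink=True):
    """Objective (1), sketched: the flow at pixel p is constrained to
    u_p = alpha_p * (v - p); the optimal alpha_p has a closed form and is
    clamped so that all flows point toward (sink) or away from (source) v."""
    d = v[None, :] - pts                                   # (N, 2)
    dAd = np.einsum('ni,nij,nj->n', d, A, d) + 1e-12
    bd = np.einsum('ni,ni->n', b, d)
    alpha = -bd / dAd                                      # unconstrained optimum
    alpha = np.maximum(alpha, 0.0) if sink else np.minimum(alpha, 0.0)
    return np.sum(alpha ** 2 * dAd + 2.0 * alpha * bd + c)

def estimate_vanishing_point(pts, A, b, c, v0, sink=True):
    res = minimize(constrained_motion_error, np.asarray(v0, float),
                   args=(pts, A, b, c, sink), method="Nelder-Mead")
    return res.x
```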
37 Related Work on Constrained Motion Estimation The vanishing point estimation step of our work can be seen as an egomotion problem, except with the camera as the inertial reference frame instead of the scene. [sent-106, score-0.889]
38 Our vanishing point estimation approach is most similar to work on direct motion estimation [9, 4], which estimates motion directly from image derivatives. [sent-108, score-1.062]
39 1, we extend this work to estimate multiple discrete vanishing points, each constrained to the same horizon line. [sent-111, score-1.078]
40 [19] propose two methods for horizon line estimation for translational dynamic textures. [sent-113, score-0.467]
41 The second method assumes that the pattern is a weak-sense stationary space-time process; we attempted to aggregate motion statistics across all videos for a single scene and found that this assumption was rarely satisfied. [sent-115, score-0.288]
42 We believe this is due to biases introduced by prevailing wind directions. [sent-116, score-0.282]
43 Figure 3: Estimated cloud motions: (a) unconstrained flow vectors, estimated independently at each pixel; (b) our globally constrained direct estimation approach; (c) error surface (sink); (d) error surface (source). [sent-117, score-0.361]
44 The dot (red) is the estimated vanishing point location. [sent-118, score-0.752]
45 (c,d) For the same scene, false color images (with original image outlined in blue for reference) in which intensity corresponds to the value of our error function (1) for different vanishing points (dark values correspond to low error). [sent-119, score-0.768]
46 In (c) the vanishing point is considered a sink, and in (d) it is considered a source. [sent-120, score-0.726]
47 These error surfaces show that the vanishing point is a source located on the left side of the image. [sent-121, score-0.76]
48 Geometric Calibration Using Cloud Motion Our assumption that cloud motion is largely translational and horizontal enables us to solve a broad range of camera calibration problems. [sent-123, score-0.683]
49 We begin with an approach for estimating the horizon line. [sent-124, score-0.344]
50 Horizon Line Estimation We jointly estimate image-space vanishing points, V¯, that are consistent with our assumption of horizontal cloud motion. [sent-127, score-0.981]
51 We start with the set of vanishing points estimated on individual days, V, and estimate an optimal set of vanishing points that are constrained to the horizon line, V¯. [sent-128, score-1.878]
52 Since individual vanishing points may be incorrect, we need a principled means of combining vanishing points. [sent-129, score-1.428]
53 At a minimum, we need 2 videos with independent wind directions (not in the same or opposite directions); however, as usual, more videos result in a more reliable horizon line estimate. [sent-130, score-0.844]
54 We propose to extend our method for single-day vanishing point estimation to multiple days. [sent-131, score-0.756]
55 Directly fitting a line to the estimated vanishing points ignores the relative confidences associated with each vanishing point due to differing cloud conditions. [sent-134, score-1.802]
56 This means that fitting a line directly to the set of vanishing points often fails to find an accurate horizon line, which leads to a cascade of errors in the remaining calibration steps. [sent-135, score-1.283]
57 (4) subject to the constraint that the vanishing points are collinear. [sent-138, score-0.734]
58 We use two variables that represent the height of the horizon line at the left and right edge of the image, hl , hr, and represent the vanishing points by their distance from the center of the image along the horizon line, φi. [sent-140, score-1.427]
59 We optimize the objective function using gradient descent with initial conditions determined by a line fit directly to the vanishing points, V. [sent-143, score-0.834]
60 The gradient of the vanishing point location, φi, is computed by projecting the unconstrained vanishing point gradient onto the horizon line, as follows: ∂F/∂φi = ((hr − hl) / |hr − hl|)^T ∂F/∂vi (6), where (hr − hl) is a vector along the horizon line. [sent-144, score-2.056]
61 We compute the gradients of the horizon line points by aggregating the normal components of the unconstrained gradients, ∂F/∂vi. [sent-145, score-0.414]
62 We compute the new line parameters that would result if we performed the updates specified by the gradients and use the difference between the new line and the current line as the gradient. [sent-146, score-0.266]
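A sketch of the horizon parameterization and the gradient projection of (6), assuming hl and hr are the horizon heights at the left and right image edges (so the corresponding 2D edge points are (0, hl) and (W, hr)):

```python
import numpy as np

def horizon_point(hl, hr, phi, width):
    """Map a 1-D coordinate phi (signed distance from the image-center column
    along the horizon line) to a 2-D vanishing point."""
    left, right = np.array([0.0, hl]), np.array([width, hr])
    direction = (right - left) / np.linalg.norm(right - left)
    center = 0.5 * (left + right)
    return center + phi * direction

def grad_phi(hl, hr, width, grad_v):
    """Eq. (6): project the unconstrained vanishing-point gradient onto the
    horizon direction to get the gradient with respect to phi."""
    direction = np.array([width, hr - hl])
    direction = direction / np.linalg.norm(direction)
    return float(direction @ np.asarray(grad_v, float))
```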
63 Focal Length and Geo-Orientation Estimation We use the constrained vanishing points, V¯, to estimate the focal length and geo-orientation of the camera. [sent-149, score-0.892]
64 Many approaches have been proposed for using vanishing points for camera calibration; they typically require mutually orthogonal vanishing points [7] or, more generally, known angles [22]. [sent-150, score-1.571]
65 In our approach, we consider vanishing points as noisy observations of a translational motion with known geo-orientation. [sent-151, score-0.908]
66 Using this information, we query a weather database to obtain an estimate of the wind velocity, wi, for each video. [sent-153, score-0.381]
67 If the camera is correctly calibrated, the cloud-motion vanishing points will correspond with the true cloud motion which is largely determined by the wind velocity. [sent-154, score-1.441]
68 Since we know the horizon line, there are only two unknowns, the camera azimuth, θ, and the focal length, f. [sent-156, score-0.495]
69 The intuition behind this error is that if the vanishing points are perfectly aligned with the wind vectors, the error will be zero, and if they are in exactly the wrong direction, the error will be twice the total magnitude of the wind vectors. [sent-163, score-1.4]
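A hedged sketch of this step, assuming a simplified camera with zero roll and the principal point on the horizon line, and an alignment error consistent with the stated intuition (zero when aligned, twice the total wind magnitude when opposed); the paper's exact error (7) may differ:

```python
import numpy as np

def orientation_error(theta, f, phis, winds):
    """phis: per-video vanishing-point offsets (pixels) along the horizon line;
    winds: per-video wind vectors (east, north) from the weather database."""
    err = 0.0
    for phi, w in zip(phis, winds):
        psi = theta + np.arctan2(phi, f)        # azimuth implied by the vanishing point
        wind_az = np.arctan2(w[0], w[1])        # wind azimuth, east-of-north (assumed convention)
        err += np.linalg.norm(w) * (1.0 - np.cos(psi - wind_az))
    return err

def fit_orientation(phis, winds, focal_range=(300.0, 3000.0)):
    """Coarse grid search over camera azimuth theta and focal length f."""
    thetas = np.linspace(-np.pi, np.pi, 360, endpoint=False)
    focals = np.linspace(focal_range[0], focal_range[1], 100)
    best = min(((orientation_error(t, f, phis, winds), t, f)
                for t in thetas for f in focals), key=lambda e: e[0])
    return best[1], best[2]                      # (theta, f)
```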
70 Evaluation We evaluate the quantitative and qualitative performance of our proposed calibration approaches and vanishing point estimation method on video from seven outdoor scenes, including six from the LOST dataset [2]. [sent-166, score-1.092]
71 Automated Video Filtering On certain days, individual videos captured “in the wild” may violate our assumption that cloud motion can be modeled as a single translation. [sent-172, score-0.459]
72 This metric is the ratio of two quantities: the numerator is the optimal value of the vanishing point objective function (1) and the denominator is the median of the same function on the grid of samples defined in Sec. [sent-174, score-0.747]
73 The intuition behind this metric is that a large improvement in this error ratio suggests that there is a unique vanishing point. [sent-177, score-0.728]
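A minimal sketch of this filtering metric; the acceptance threshold is illustrative, not a value taken from the paper:

```python
import numpy as np

def error_ratio(optimal_value, grid_values):
    """Ratio of the optimal objective value (1) to the median objective value
    over the grid of candidate vanishing points; a small ratio suggests a
    unique, well-localized vanishing point."""
    return optimal_value / np.median(grid_values)

def keep_video(optimal_value, grid_values, threshold=0.5):
    return error_ratio(optimal_value, grid_values) < threshold
```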
74 Automatic Radial Distortion Correction We evaluated our cloud-based distortion estimation method and found that it reliably estimated the distortion on the scenes in our dataset (Fig. [sent-185, score-0.391]
75 We found that the results from our automatically selected lines were qualitatively better than using manually selected lines as input to the same line-based distortion estimation routine. [sent-190, score-0.267]
76 To better understand the performance of our method, we compare distortion estimates using different numbers of videos to the final calibration result obtained using all videos. [sent-192, score-0.441]
77 Vanishing Point Estimation Using all the pixels from the sky region to find the vanishing point is computationally expensive, so in practice we subsample pixels. [sent-199, score-0.788]
78 For each video in our dataset, we computed the vanishing point for differing numbers of random samples of pixels. [sent-204, score-0.796]
79 the vanishing point computed using the largest sample size across all videos with 15 random samples for each setting. [sent-208, score-0.811]
80 6) demonstrate that ten thousand pixels result in average vanishing point estimation ... Figure 4: (left) All videos from a single scene sorted by the error ratio defined in Section 4. [sent-210, score-0.916]
81 Figure 6: Increasing the number of pixels included in the vanishing point estimation process (Sec. [sent-240, score-0.756]
82 1 to estimate the horizon line and a set of constrained vanishing points. [sent-248, score-1.16]
83 7 show that even when some of the initial vanishing points are not close to the horizon, our method is able to combine all of the days of data and estimate an accurate horizon line. [sent-250, score-1.181]
84 For each video, we know the time-stamp; if possible we use the location and time-stamp to obtain archived upper-level wind data from the FAA; otherwise we use ground-level wind data from Weather Underground. [sent-254, score-0.586]
85 For the FAA data, we linearly interpolate between the provided times and location for each elevation and then average across elevations to obtain a single wind velocity estimate for each video. [sent-264, score-0.402]
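A sketch of this interpolation-and-averaging step, assuming the archived records give wind speed and direction per elevation at a common set of sample times; the direction convention (heading toward vs. blowing from) is an assumption:

```python
import numpy as np

def wind_for_video(video_time, sample_times, speeds, directions_deg):
    """speeds, directions_deg: (num_elevations, num_times) arrays. Direction is
    taken as the heading the wind blows toward, clockwise from north; flip the
    sign of the result if the data reports the direction the wind comes from."""
    theta = np.deg2rad(directions_deg)
    u = speeds * np.sin(theta)                           # east components
    v = speeds * np.cos(theta)                           # north components
    u_t = np.array([np.interp(video_time, sample_times, row) for row in u])
    v_t = np.array([np.interp(video_time, sample_times, row) for row in v])
    return np.array([u_t.mean(), v_t.mean()])            # single (east, north) estimate
```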
86 Based on these results and visual inspection of the error function (7), the cloud motion cue provides a stronger constraint on the camera orientation than on the focal length. [sent-269, score-0.629]
87 We believe this is largely due to errors in the wind data. [sent-271, score-0.282]
88 Based on manual inspection we see many cases in which the estimated wind data does not agree with the observed motion of the clouds. [sent-272, score-0.447]
89 Conclusion We have shown that cloud motion provides geometric cues that enable camera calibration. [sent-275, score-0.425]
90 We introduced automated methods for estimating the radial distortion, horizon line, focal length and geo-orientation of a static outdoor camera from video captured over many days. [sent-276, score-0.887]
91 These methods enable calibration in scenes which do not contain sufficient static geometric information for more traditional calibration techniques, such as orthogonal vanishing points [3] or registration with known geometry [1]. [sent-277, score-1.182]
92 Our work is based on several assumptions about cloud motion that are not always satisfied. [sent-278, score-0.343]
93 To handle such violations, for example when the wind direction changes or two layers of clouds are moving in different directions, we propose to filter out videos based on the uniqueness of the vanishing point. [sent-280, score-1.113]
94 We further assume that cloud motion is in the horizontal plane. [sent-281, score-0.322]
95 Acknowledgments We thank Robert Pless, Austin Abrams, Jim Tucek and Joshua Little for collecting the LOST dataset and Jim Knochelmann for collecting the wind velocity metadata. [sent-283, score-0.365]
96 Using cloud shadows to infer scene structure and camera calibration. [sent-346, score-0.372]
97 Radiometric calibration with illumination change for outdoor scene analysis. [sent-381, score-0.298]
98 (left) For each scene we show a sample image overlaid with the single-video vanishing points (red dots), the estimated horizon line (green) and the final constrained vanishing points (connected to the original by a red line). [sent-448, score-1.956]
99 The lines emanating from the center are the world-space rays that correspond to the estimated vanishing points. [sent-451, score-0.764]
100 The lines outside the compass ring represent the wind velocities used for calibration. [sent-452, score-0.326]
wordName wordTfidf (topN-words)
[('vanishing', 0.694), ('horizon', 0.292), ('wind', 0.282), ('cloud', 0.211), ('calibration', 0.175), ('streamlines', 0.161), ('distortion', 0.149), ('radial', 0.132), ('motion', 0.111), ('camera', 0.103), ('days', 0.099), ('outdoor', 0.092), ('videos', 0.085), ('line', 0.082), ('focal', 0.078), ('webcams', 0.074), ('jacobs', 0.068), ('abrams', 0.066), ('translational', 0.063), ('sky', 0.062), ('static', 0.061), ('estimate', 0.056), ('lost', 0.048), ('video', 0.047), ('conference', 0.047), ('velocity', 0.045), ('cue', 0.044), ('lines', 0.044), ('weather', 0.043), ('aggregate', 0.041), ('points', 0.04), ('fridrich', 0.04), ('jim', 0.04), ('orsurface', 0.04), ('taptbp', 0.04), ('tucek', 0.04), ('scenes', 0.037), ('flow', 0.036), ('constrained', 0.036), ('streamline', 0.036), ('sink', 0.035), ('error', 0.034), ('clouds', 0.033), ('thirty', 0.033), ('uky', 0.033), ('point', 0.032), ('estimates', 0.032), ('violate', 0.032), ('flows', 0.032), ('scene', 0.031), ('magnitudes', 0.031), ('roman', 0.031), ('estimating', 0.031), ('estimation', 0.03), ('correction', 0.03), ('egomotion', 0.03), ('optical', 0.029), ('inspection', 0.028), ('length', 0.028), ('hl', 0.027), ('bp', 0.027), ('shadows', 0.027), ('roll', 0.027), ('estimated', 0.026), ('environmental', 0.026), ('webcam', 0.026), ('imagery', 0.025), ('tilt', 0.025), ('vf', 0.024), ('straight', 0.023), ('differing', 0.023), ('sheikh', 0.023), ('automated', 0.023), ('know', 0.022), ('six', 0.022), ('direct', 0.022), ('monitoring', 0.022), ('thousand', 0.021), ('hr', 0.021), ('assumptions', 0.021), ('begin', 0.021), ('objective', 0.021), ('overlaid', 0.021), ('unknowns', 0.02), ('yp', 0.02), ('unconstrained', 0.02), ('orientation', 0.02), ('international', 0.02), ('assumption', 0.02), ('interpolate', 0.019), ('workshop', 0.019), ('collecting', 0.019), ('filter', 0.019), ('ten', 0.019), ('ieee', 0.019), ('tensor', 0.019), ('fit', 0.019), ('optimize', 0.018), ('px', 0.018), ('directions', 0.018)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999934 84 cvpr-2013-Cloud Motion as a Calibration Cue
Author: Nathan Jacobs, Mohammad T. Islam, Scott Workman
Abstract: We propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length and geo-orientation. Cloud-based cues provide strong constraints and are an important alternative to methods that require specific forms of static scene geometry or clear sky conditions. Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. We show results on real scenes that highlight the effectiveness of our proposed methods.
Author: Yiliang Xu, Sangmin Oh, Anthony Hoogs
Abstract: We present a novel vanishing point detection algorithm for uncalibrated monocular images of man-made environments. We advance the state-of-the-art by a new model of measurement error in the line segment extraction and minimizing its impact on the vanishing point estimation. Our contribution is twofold: 1) Beyond existing hand-crafted models, we formally derive a novel consistency measure, which captures the stochastic nature of the correlation between line segments and vanishing points due to the measurement error, and use this new consistency measure to improve the line segment clustering. 2) We propose a novel minimum error vanishing point estimation approach by optimally weighing the contribution of each line segment pair in the cluster towards the vanishing point estimation. Unlike existing works, our algorithm provides an optimal solution that minimizes the uncertainty of the vanishing point in terms of the trace of its covariance, in a closed-form. We test our algorithm and compare it with the state-of-the-art on two public datasets: York Urban Dataset and Eurasian Cities Dataset. The experiments show that our approach outperforms the state-of-the-art.
3 0.19840235 344 cvpr-2013-Radial Distortion Self-Calibration
Author: José Henrique Brito, Roland Angst, Kevin Köser, Marc Pollefeys
Abstract: In cameras with radial distortion, straight lines in space are in general mapped to curves in the image. Although epipolar geometry also gets distorted, there is a set of special epipolar lines that remain straight, namely those that go through the distortion center. By finding these straight epipolar lines in camera pairs we can obtain constraints on the distortion center(s) without any calibration object or plumbline assumptions in the scene. Although this holds for all radial distortion models we conceptually prove this idea using the division distortion model and the radial fundamental matrix which allow for a very simple closed form solution of the distortion center from two views (same distortion) or three views (different distortions). The non-iterative nature of our approach makes it immune to local minima and allows finding the distortion center also for cropped images or those where no good prior exists. Besides this, we give comprehensive relations between different undistortion models and discuss advantages and drawbacks.
4 0.13643716 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?
Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello
Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasi-pinhole central cameras is supported by an extensive experimental validation.
5 0.13162427 12 cvpr-2013-A Global Approach for the Detection of Vanishing Points and Mutually Orthogonal Vanishing Directions
Author: Michel Antunes, João P. Barreto
Abstract: This article presents a new global approach for detecting vanishing points and groups of mutually orthogonal vanishing directions using lines detected in images of man-made environments. These two multi-model fitting problems are respectively cast as Uncapacited Facility Location (UFL) and Hierarchical Facility Location (HFL) instances that are efficiently solved using a message passing inference algorithm. We also propose new functions for measuring the consistency between an edge and aputative vanishingpoint, and for computing the vanishing point defined by a subset of edges. Extensive experiments in both synthetic and real images show that our algorithms outperform the state-ofthe-art methods while keeping computation tractable. In addition, we show for the first time results in simultaneously detecting multiple Manhattan-world configurations that can either share one vanishing direction (Atlanta world) or be completely independent.
6 0.11498455 176 cvpr-2013-Five Shades of Grey for Fast and Reliable Camera Pose Estimation
7 0.11345875 278 cvpr-2013-Manhattan Junction Catalogue for Spatial Reasoning of Indoor Scenes
8 0.10847858 368 cvpr-2013-Rolling Shutter Camera Calibration
9 0.1076827 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras
10 0.10346144 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging
11 0.10128543 4 cvpr-2013-3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image
12 0.099665798 124 cvpr-2013-Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform
13 0.091931291 158 cvpr-2013-Exploring Weak Stabilization for Motion Feature Extraction
14 0.089866027 244 cvpr-2013-Large Displacement Optical Flow from Nearest Neighbor Fields
15 0.087194264 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds
16 0.087049045 423 cvpr-2013-Template-Based Isometric Deformable 3D Reconstruction with Sampling-Based Focal Length Self-Calibration
17 0.076002099 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation
18 0.075716823 381 cvpr-2013-Scene Parsing by Integrating Function, Geometry and Appearance Models
19 0.075607099 233 cvpr-2013-Joint Sparsity-Based Representation and Analysis of Unconstrained Activities
20 0.075132221 428 cvpr-2013-The Episolar Constraint: Monocular Shape from Shadow Correspondence
topicId topicWeight
[(0, 0.148), (1, 0.11), (2, 0.008), (3, -0.019), (4, -0.031), (5, -0.034), (6, -0.02), (7, -0.036), (8, -0.005), (9, 0.06), (10, 0.036), (11, 0.077), (12, 0.119), (13, -0.044), (14, -0.061), (15, -0.012), (16, 0.104), (17, 0.11), (18, -0.101), (19, 0.016), (20, 0.042), (21, 0.016), (22, -0.063), (23, -0.074), (24, 0.05), (25, -0.015), (26, -0.068), (27, -0.041), (28, -0.007), (29, 0.079), (30, -0.076), (31, 0.082), (32, -0.02), (33, 0.238), (34, -0.156), (35, 0.151), (36, -0.113), (37, -0.023), (38, 0.051), (39, 0.112), (40, 0.09), (41, -0.155), (42, 0.001), (43, 0.014), (44, 0.035), (45, -0.073), (46, -0.066), (47, 0.037), (48, -0.01), (49, 0.033)]
simIndex simValue paperId paperTitle
same-paper 1 0.92689073 84 cvpr-2013-Cloud Motion as a Calibration Cue
Author: Nathan Jacobs, Mohammad T. Islam, Scott Workman
Abstract: We propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length and geo-orientation. Cloud-based cues provide strong constraints and are an important alternative to methods that require specific forms of static scene geometry or clear sky conditions. Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. We show results on real scenes that highlight the effectiveness of our proposed methods.
Author: Yiliang Xu, Sangmin Oh, Anthony Hoogs
Abstract: We present a novel vanishing point detection algorithm for uncalibrated monocular images of man-made environments. We advance the state-of-the-art by a new model of measurement error in the line segment extraction and minimizing its impact on the vanishing point estimation. Our contribution is twofold: 1) Beyond existing hand-crafted models, we formally derive a novel consistency measure, which captures the stochastic nature of the correlation between line segments and vanishing points due to the measurement error, and use this new consistency measure to improve the line segment clustering. 2) We propose a novel minimum error vanishing point estimation approach by optimally weighing the contribution of each line segment pair in the cluster towards the vanishing point estimation. Unlike existing works, our algorithm provides an optimal solution that minimizes the uncertainty of the vanishing point in terms of the trace of its covariance, in a closed-form. We test our algorithm and compare it with the state-of-the-art on two public datasets: York Urban Dataset and Eurasian Cities Dataset. The experiments show that our approach outperforms the state-of-the-art.
3 0.82063448 12 cvpr-2013-A Global Approach for the Detection of Vanishing Points and Mutually Orthogonal Vanishing Directions
Author: Michel Antunes, João P. Barreto
Abstract: This article presents a new global approach for detecting vanishing points and groups of mutually orthogonal vanishing directions using lines detected in images of man-made environments. These two multi-model fitting problems are respectively cast as Uncapacited Facility Location (UFL) and Hierarchical Facility Location (HFL) instances that are efficiently solved using a message passing inference algorithm. We also propose new functions for measuring the consistency between an edge and aputative vanishingpoint, and for computing the vanishing point defined by a subset of edges. Extensive experiments in both synthetic and real images show that our algorithms outperform the state-ofthe-art methods while keeping computation tractable. In addition, we show for the first time results in simultaneously detecting multiple Manhattan-world configurations that can either share one vanishing direction (Atlanta world) or be completely independent.
4 0.76508349 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging
Author: Jinwei Ye, Yu Ji, Jingyi Yu
Abstract: A Manhattan World (MW) [3] is composed of planar surfaces and parallel lines aligned with three mutually orthogonal principal axes. Traditional MW understanding algorithms rely on geometry priors such as the vanishing points and reference (ground) planes for grouping coplanar structures. In this paper, we present a novel single-image MW reconstruction algorithm from the perspective of nonpinhole cameras. We show that by acquiring the MW using an XSlit camera, we can instantly resolve coplanarity ambiguities. Specifically, we prove that parallel 3D lines map to 2D curves in an XSlit image and they converge at an XSlit Vanishing Point (XVP). In addition, if the lines are coplanar, their curved images will intersect at a second common pixel that we call Coplanar Common Point (CCP). CCP is a unique image feature in XSlit cameras that does not exist in pinholes. We present a comprehensive theory to analyze XVPs and CCPs in a MW scene and study how to recover 3D geometry in a complex MW scene from XVPs and CCPs. Finally, we build a prototype XSlit camera by using two layers of cylindrical lenses. Experimental results × on both synthetic and real data show that our new XSlitcamera-based solution provides an effective and reliable solution for MW understanding.
5 0.72138512 344 cvpr-2013-Radial Distortion Self-Calibration
Author: José Henrique Brito, Roland Angst, Kevin Köser, Marc Pollefeys
Abstract: In cameras with radial distortion, straight lines in space are in general mapped to curves in the image. Although epipolar geometry also gets distorted, there is a set of special epipolar lines that remain straight, namely those that go through the distortion center. By finding these straight epipolar lines in camera pairs we can obtain constraints on the distortion center(s) without any calibration object or plumbline assumptions in the scene. Although this holds for all radial distortion models we conceptually prove this idea using the division distortion model and the radial fundamental matrix which allow for a very simple closed form solution of the distortion center from two views (same distortion) or three views (different distortions). The non-iterative nature of our approach makes it immune to local minima and allows finding the distortion center also for cropped images or those where no good prior exists. Besides this, we give comprehensive relations between different undistortion models and discuss advantages and drawbacks.
6 0.69673133 368 cvpr-2013-Rolling Shutter Camera Calibration
7 0.62319869 176 cvpr-2013-Five Shades of Grey for Fast and Reliable Camera Pose Estimation
8 0.59148216 278 cvpr-2013-Manhattan Junction Catalogue for Spatial Reasoning of Indoor Scenes
9 0.55621511 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras
10 0.53727347 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display
11 0.53370845 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?
12 0.49908185 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas
13 0.47396541 290 cvpr-2013-Motion Estimation for Self-Driving Cars with a Generalized Camera
14 0.47263691 35 cvpr-2013-Adaptive Compressed Tomography Sensing
15 0.47079074 333 cvpr-2013-Plane-Based Content Preserving Warps for Video Stabilization
16 0.46560156 124 cvpr-2013-Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform
17 0.44883564 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation
19 0.42846474 428 cvpr-2013-The Episolar Constraint: Monocular Shape from Shadow Correspondence
20 0.42333013 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems
topicId topicWeight
[(1, 0.219), (10, 0.142), (16, 0.023), (26, 0.05), (28, 0.026), (33, 0.27), (65, 0.013), (67, 0.039), (69, 0.026), (87, 0.082)]
simIndex simValue paperId paperTitle
same-paper 1 0.86664355 84 cvpr-2013-Cloud Motion as a Calibration Cue
Author: Nathan Jacobs, Mohammad T. Islam, Scott Workman
Abstract: We propose cloud motion as a natural scene cue that enables geometric calibration of static outdoor cameras. This work introduces several new methods that use observations of an outdoor scene over days and weeks to estimate radial distortion, focal length and geo-orientation. Cloud-based cues provide strong constraints and are an important alternative to methods that require specific forms of static scene geometry or clear sky conditions. Our method makes simple assumptions about cloud motion and builds upon previous work on motion-based and line-based calibration. We show results on real scenes that highlight the effectiveness of our proposed methods.
2 0.85078305 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices
Author: Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
Abstract: Symmetric Positive Definite (SPD) matrices have become popular to encode image information. Accounting for the geometry of the Riemannian manifold of SPD matrices has proven key to the success of many algorithms. However, most existing methods only approximate the true shape of the manifold locally by its tangent plane. In this paper, inspired by kernel methods, we propose to map SPD matrices to a high dimensional Hilbert space where Euclidean geometry applies. To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices. These kernels are derived from the Gaussian kernel, but exploit different metrics on the manifold. This lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM and kernel PCA, to the Riemannian manifold of SPD matrices. We demonstrate the benefits of our approach on the problems of pedestrian detection, object categorization, texture analysis, 2D motion segmentation and Diffusion Tensor Imaging (DTI) segmentation.
3 0.84213138 383 cvpr-2013-Seeking the Strongest Rigid Detector
Author: Rodrigo Benenson, Markus Mathias, Tinne Tuytelaars, Luc Van_Gool
Abstract: The current state of the art solutions for object detection describe each class by a set of models trained on discovered sub-classes (so called “components ”), with each model itself composed of collections of interrelated parts (deformable models). These detectors build upon the now classic Histogram of Oriented Gradients+linear SVM combo. In this paper we revisit some of the core assumptions in HOG+SVM and show that by properly designing the feature pooling, feature selection, preprocessing, and training methods, it is possible to reach top quality, at least for pedestrian detections, using a single rigid component. We provide experiments for a large design space, that give insights into the design of classifiers, as well as relevant information for practitioners. Our best detector is fully feed-forward, has a single unified architecture, uses only histograms of oriented gradients and colour information in monocular static images, and improves over 23 other methods on the INRIA, ETHand Caltech-USA datasets, reducing the average miss-rate over HOG+SVM by more than 30%.
4 0.84029734 374 cvpr-2013-Saliency Aggregation: A Data-Driven Approach
Author: Long Mai, Yuzhen Niu, Feng Liu
Abstract: A variety of methods have been developed for visual saliency analysis. These methods often complement each other. This paper addresses the problem of aggregating various saliency analysis methods such that the aggregation result outperforms each individual one. We have two major observations. First, different methods perform differently in saliency analysis. Second, the performance of a saliency analysis method varies with individual images. Our idea is to use data-driven approaches to saliency aggregation that appropriately consider the performance gaps among individual methods and the performance dependence of each method on individual images. This paper discusses various data-driven approaches and finds that the image-dependent aggregation method works best. Specifically, our method uses a Conditional Random Field (CRF) framework for saliency aggregation that not only models the contribution from individual saliency map but also the interaction between neighboringpixels. To account for the dependence of aggregation on an individual image, our approach selects a subset of images similar to the input image from a training data set and trains the CRF aggregation model only using this subset instead of the whole training set. Our experiments on public saliency benchmarks show that our aggregation method outperforms each individual saliency method and is robust with the selection of aggregated methods.
5 0.8153019 248 cvpr-2013-Learning Collections of Part Models for Object Recognition
Author: Ian Endres, Kevin J. Shih, Johnston Jiaa, Derek Hoiem
Abstract: We propose a method to learn a diverse collection of discriminative parts from object bounding box annotations. Part detectors can be trained and applied individually, which simplifies learning and extension to new features or categories. We apply the parts to object category detection, pooling part detections within bottom-up proposed regions and using a boosted classifier with proposed sigmoid weak learners for scoring. On PASCAL VOC 2010, we evaluate the part detectors ’ ability to discriminate and localize annotated keypoints. Our detection system is competitive with the best-existing systems, outperforming other HOG-based detectors on the more deformable categories.
6 0.81527203 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation
7 0.8150332 143 cvpr-2013-Efficient Large-Scale Structured Learning
8 0.81465316 360 cvpr-2013-Robust Estimation of Nonrigid Transformation for Point Set Registration
9 0.81463808 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking
10 0.81439042 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path
11 0.81434071 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities
12 0.81429416 414 cvpr-2013-Structure Preserving Object Tracking
13 0.81425053 406 cvpr-2013-Spatial Inference Machines
14 0.81403887 408 cvpr-2013-Spatiotemporal Deformable Part Models for Action Detection
15 0.81400508 393 cvpr-2013-Separating Signal from Noise Using Patch Recurrence across Scales
16 0.81379545 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image
17 0.81299996 121 cvpr-2013-Detection- and Trajectory-Level Exclusion in Multiple Object Tracking
18 0.81287402 104 cvpr-2013-Deep Convolutional Network Cascade for Facial Point Detection
19 0.81259692 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds
20 0.81258011 267 cvpr-2013-Least Soft-Threshold Squares Tracking