cvpr cvpr2013 cvpr2013-283 knowledge-graph by maker-knowledge-mining

283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas


Source: pdf

Author: Christian Richardt, Yael Pritch, Henning Zimmer, Alexander Sorkine-Hornung

Abstract: We present a solution for generating high-quality stereo panoramas at megapixel resolutions. While previous approaches introduced the basic principles, we show that those techniques do not generalise well to today’s high image resolutions and lead to disturbing visual artefacts. As our first contribution, we describe the necessary correction steps and a compact representation for the input images in order to achieve a highly accurate approximation to the required ray space. Our second contribution is a flow-based upsampling of the available input rays which effectively resolves known aliasing issues like stitching artefacts. The required rays are generated on the fly to perfectly match the desired output resolution, even for small numbers of input images. In addition, the upsampling is real-time and enables direct interactive control over the desired stereoscopic depth effect. In combination, our contributions allow the generation of stereoscopic panoramas at high output resolutions that are virtually free of artefacts such as seams, stereo discontinuities, vertical parallax and other mono-/stereoscopic shape distortions. Our process is robust, and other types of multiperspective panoramas, such as linear panoramas, can also benefit from our contributions. We show various comparisons and high-resolution results.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Our second contribution is a flow-based upsampling of the available input rays which effectively resolves known aliasing issues like stitching artefacts. [sent-4, score-0.656]

2 The required rays are generated on the fly to perfectly match the desired output resolution, even for small numbers of input images. [sent-5, score-0.312]

3 In addition, the upsampling is real-time and enables direct interactive control over the desired stereoscopic depth effect. [sent-6, score-0.344]

4 In combination, our contributions allow the generation of stereoscopic panoramas at high output resolutions that are virtually free of artefacts such as seams, stereo discontinuities, vertical parallax and other mono-/stereoscopic shape distortions. [sent-7, score-1.377]

5 A great way of capturing environmental content is the panorama (see Figure 1). [sent-12, score-0.328]

6 Nowadays, automatic tools for stitching panoramas from multiple images are easily available, even in consumer cameras. [sent-13, score-0.637]

7 For circular 360° panoramas, one usually assumes a common camera centre for all images to minimise stitching artefacts due to the motion parallax between the images [2, 23]. [sent-14, score-1.051]

8 However, such panoramas inherently lack parallax and therefore cannot be experienced stereoscopically. [sent-16, score-0.565]

9 Top: stereoscopic panorama created using our system (red-cyan anaglyph). [sent-19, score-0.493]

10 Middle: close-ups of stitching seams (left; illustrated in 2D) and vertical parallax (right) visible with previous methods. [sent-20, score-0.854]

11 Motion parallax is explicitly captured by taking images with varying centres of projection, e.g. [sent-24, score-0.323]

12 A stereoscopic panorama can then be created by stitching specific strips from the input views. [sent-27, score-0.917]

13 While successfully introducing parallax, this strategy suffers from a number of unresolved practical issues that cause disturbing artefacts such as visible seams or vertical parallax in the panorama (see Figure 1). [sent-28, score-1.19]

14 The principal reason for these artefacts is that in practice our camera can capture light rays only at quite a limited spatio-angular resolution, i.e. [sent-30, score-0.589]

15 This insufficiently-dense sampling of the scene manifests itself as visible discontinuities in the output panorama in areas where neighbouring output pixels have been synthesised from different input views with strong parallax between them. [sent-33, score-0.799]

16 Secondly, while the optical centres of all camera positions should ideally lie on a perfect circle [15], in practical acquisition scenarios and especially for hand-held capture, this is nearly impossible to achieve. [sent-35, score-0.318]

17 Specifically, we describe robust solutions to correct the input views which resolve issues caused by perspective distortion and vertical parallax, obtaining an optimal alignment of the input images with minimal drift. [sent-38, score-0.321]

18 Our resulting representation is compatible with previous panorama stitching and mosaicing approaches. [sent-39, score-0.671]

19 Secondly, we analyse typical aliasing artefacts known from previous approaches that lead to visible seams caused by the truncations and duplications of objects. [sent-40, score-0.632]

20 As a solution, we describe how to upsample the set of captured and corrected light rays using optical-flow-based interpolation techniques, effectively achieving a continuous representation of the ray space required for omnistereo panorama generation. [sent-41, score-0.849]

21 By sampling from this representation, we are able to produce megapixel stereoscopic panoramas without artefacts such as seams or vertical parallax, and with real-time control over the resulting stereo effect. [sent-42, score-1.185]

22 As demonstrated in the results, our contributions resolve central issues of existing techniques for both stereo- and monoscopic panorama generation, as well as for any multiperspective imaging method based on stitching, like x-slit [30], pushbroom [5] or general linear cameras [26]. [sent-43, score-0.624]

23 Related work Standard panoramas capture a wide field of view of a scene as seen from a single centre of projection. [sent-45, score-0.452]

24 Most commonly, they are created by stitching multiple photos into one mosaic; see Szeliski [23] for a detailed survey. [sent-46, score-0.309]

25 However, these approaches cannot capture both a 360° view of the scene and stereoscopic depth. [sent-50, score-0.265]

26 Consequently, this idea was extended to panoramas by moving the camera along a circular trajectory with either a tangential [20] or orthogonal camera viewing direction [15], the latter being known as omnistereo panoramas. [sent-51, score-1.061]

27 Before stitching the images into any kind of panorama, one first needs to align them relative to each other, which amounts to estimation of the camera motion. [sent-55, score-0.432]

28 These methods hence lead to artefacts if the scene has a complex depth structure. [sent-60, score-0.311]

29 To solve this problem, one can estimate the scene depth of the panorama [12], and use this information to compute the ego motion of the camera [16, 17, 28], i.e. [sent-61, score-0.454]

30 We show how to adapt similar techniques to achieve highly accurate alignment for omnistereo panoramas. [sent-68, score-0.343]

31 A major problem when stitching multi-perspective images is that parallax leads to disturbing seams, i.e. [sent-70, score-0.589]

32 One way to alleviate this problem is to blend between the strips using strategies like simple linear (alpha), pyramid-based [3], or gradient-domain blending [11]. [sent-73, score-0.286]

33 More important, however, is that in the context of omnistereo panoramas, we need concise control over the resulting output parallax in order to achieve proper stereoscopic viewing. [sent-76, score-0.552]

34 While the above blending approaches might be applicable for monoscopic stitching, in stereoscopic 3D the resulting inconsistencies can become unacceptable [8]. [sent-77, score-0.397]

35 To rectify seams in a more principled way, previous work [16, 19] proposed to use more images to get a denser angular sampling of the scene, resulting in thinner strips and smaller discontinuities. [sent-78, score-0.373]

36 Furthermore, these methods are prone to give artefacts for thin vertical structures as they are often missed in depth or flow estimates. [sent-81, score-0.521]

37 Optical flow was also used for improving hand-held capture of 2D panoramas [9, 21], to remove the (in this case) undesired motion parallax. [sent-82, score-0.45]

38 We describe an optical-flow-based ray upsampling that works on the fly and is specifically tailored to our context of efficiently creating high-quality, high-resolution stereo panoramas. [sent-83, score-0.288]

39 Method overview The fundamental principle behind omnistereo panorama generation, as introduced by Peleg et al. [sent-85, score-0.574]

40 In practice, one usually captures an image sequence with a camera moving along a circular trajectory with its principal axis parallel to the plane spanned by the camera trajectory (see Figure 2). [sent-87, score-0.402]

41 An omnistereo panorama can then be created by stitching, for each eye, specific vertical strips from the aligned images, such that the above ray geometry is approximated. [sent-88, score-0.886]

42 This approximation to the desired ray geometry typically suffers from inaccuracies of the capture setup and limited angular sampling, resulting in the previously mentioned artefacts such as vertical parallax (see Figure 1). [sent-89, score-0.893]

43 In Section 4, we describe a specific transformation and alignment approach employing camera orientation correction, cylindrical image re-projection, and optimised homography matching that overcomes those issues. [sent-90, score-0.309]

44 This further deteriorates the approximation quality to the actually required set of rays and leads to aliasing artefacts (seams, truncation, duplication). [sent-93, score-0.527]

45 In Section 5, we present a solution using flow-based stitching that resolves those problems. [sent-94, score-0.309]

46 Optical undistortion to pinhole model For capturing stereoscopic panoramas, it is generally beneficial to use wide-angle lenses to capture images with significant overlap and a large vertical field of view. [sent-100, score-0.478]

47 Due to the optical distortion introduced by those lenses, the first crucial step to approximate the desired ray geometry is to convert those images such that they correspond to a pinhole camera model. [sent-101, score-0.387]
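
The paper does not prescribe a particular library for this undistortion step; as a hedged illustration, the sketch below uses OpenCV's standard radial/tangential distortion model to remap a wide-angle image to an ideal pinhole view. The intrinsics K, the coefficients dist, and the file name are hypothetical placeholders that would come from a prior lens calibration.

```python
import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients -- in practice these
# come from a prior calibration of the wide-angle lens (e.g. a checkerboard).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.09, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

img = cv2.imread("input_0001.jpg")               # placeholder file name
h, w = img.shape[:2]

# Compute a rectifying camera matrix that keeps all source pixels visible
# (alpha = 1), then remap the image so it obeys an ideal pinhole model.
K_new, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1)
pinhole_img = cv2.undistort(img, K, dist, None, K_new)
```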

48 Correction of camera orientations Any deviation from the previously mentioned ideal capture setup (circular trajectory and coplanar principal axes) leads to visible shape distortions in a stereoscopic output panorama, e.g. [sent-105, score-0.794]

49 However, using a more general approach enables us to create omnistereo panoramas also from hand-held input as well as from more general camera trajectories like a straight line. [sent-111, score-0.757]

50 The goal is now to rotate each camera coordinate frame towards the idealised setup with a common up-direction and viewing directions that are perpendicular to the camera trajectory. [sent-113, score-0.333]

51 For omnistereo panoramas, we obtain u by fitting a circle to all camera centres and using the normal vector n of the plane the circle lies in: u = n. [sent-124, score-0.614]

52 For other camera trajectories (e.g. linear, as for pushbroom panoramas), we compute u as the mean up direction of the individual camera coordinate systems. [sent-127, score-0.284]

53 The mean up direction can also be used to disambiguate the normal direction in the omnistereo case, by choosing the direction that is closer to the mean up direction. [sent-128, score-0.373]
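
A minimal numpy sketch of this up-vector construction (our own illustration, not the authors' code; function and variable names are ours): fit the supporting plane of the camera circle via SVD, take its normal as u, and disambiguate the sign with the mean per-camera up direction, as described in items 51-53.

```python
import numpy as np

def panorama_up_direction(centres, cam_ups):
    """Estimate the common up-direction u for the omnistereo case.
    centres: (N, 3) reconstructed camera centres; cam_ups: (N, 3) per-camera
    up vectors, used only to disambiguate the sign of the plane normal."""
    centres = np.asarray(centres, dtype=float)
    centroid = centres.mean(axis=0)
    # Fitting a circle starts with fitting its supporting plane: the normal
    # is the direction of least variance of the centred camera positions.
    _, _, vt = np.linalg.svd(centres - centroid)
    n = vt[-1]
    # Choose the normal direction closer to the mean per-camera up-direction.
    if np.dot(n, np.mean(cam_ups, axis=0)) < 0:
        n = -n
    return n / np.linalg.norm(n)
```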

54 Correction of the camera orientations removes perspective distortion, and cylindrical projection compensates for vertical parallax. [sent-147, score-0.305]

55 Vertical parallax compensation The next issue is that in the standard pinhole model with a planar imaging surface, objects near the image border are larger than near the image centre. [sent-150, score-0.278]

56 As a consequence, for non-linear camera trajectories, motion parallax between two input views consists of a horizontal as well as a vertical component. [sent-151, score-0.519]

57 While the horizontal component is desirable for constructing a stereoscopic output image, the vertical component has to be eliminated in order to allow for proper stereoscopic viewing [8]. [sent-152, score-0.599]

58 We define the cylinder to be concentric to the circle fitted to the camera centres computed in the previous section, with the cylinder’s axis orthogonal to the circle plane and a specific radius. [sent-154, score-0.429]

59 To efficiently project each image onto this cylinder, we first establish a pixel grid on the cylinder at the desired output resolution and then project each pixel onto the pinhole camera’s imaging plane to sample the corresponding output colour. [sent-158, score-0.334]

60 Specifically, we approximate the extent of the image on the cylinder by tracing rays from the image border through the camera centre and intersecting them with the cylinder. [sent-159, score-0.428]
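
As an illustration of the backward mapping described in items 58-60 (a sketch under our own conventions, not the authors' code), the following projects a pixel grid on the cylinder into a corrected pinhole view; the resulting (u, v) maps could then be fed to an interpolation routine such as cv2.remap to sample the output colours.

```python
import numpy as np

def project_cylinder_grid(phis, ys, radius, K, R, t):
    """Backward mapping for cylindrical re-projection. Cylinder points are
    parameterised by angle phi and height y in a world frame whose y-axis is
    the cylinder axis; x_cam = R @ x_world + t is the (assumed) camera pose
    and K the pinhole intrinsics of the corrected input view."""
    phi, y = np.meshgrid(phis, ys)                 # pixel grid on the cylinder
    pts = np.stack([radius * np.sin(phi),          # 3D points on the cylinder
                    y,
                    radius * np.cos(phi)], axis=-1)
    cam = pts @ R.T + t                            # world -> camera frame
    pix = cam @ K.T                                # pinhole projection
    u = pix[..., 0] / pix[..., 2]                  # sampling positions for the
    v = pix[..., 1] / pix[..., 2]                  # subsequent remap step
    valid = cam[..., 2] > 0                        # keep points in front of camera
    return u, v, valid
```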

61 Compact representation via 2D alignment At this point, the re-oriented, parallax-compensated input views are in principle available for synthesising an omnistereo panorama. [sent-162, score-0.383]

62 However, the current representation with the images projected onto the cylindrical surface is nonstandard compared to other panorama stitching approaches and requires extra bookkeeping about the locations of the images. [sent-163, score-0.669]

63 We therefore project the images back into a planar 2D setting that is compatible with previous methods for panorama generation (hence they can directly benefit from our corrections) and simplifies the following stitching process. [sent-165, score-0.646]

64 For each pair of consecutive images, we leverage the reconstructed camera geometry to calculate the homography induced by a plane tangent to the cylinder halfway between the two camera centres in order to minimise distortions. [sent-167, score-0.543]

65 For general camera trajectories, we instead position the plane at a fixed distance (see previous section) in front of the midpoint of the two camera centres, with the plane normal halfway between the viewing directions of the two cameras. [sent-168, score-0.39]
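
The homography induced by such a tangent plane follows the standard two-view relation; a small numpy sketch in our own notation (not the paper's code) is given below. For a point X on the plane n·X = d in the first camera's frame, n·X/d = 1, so x2 = R X + t = (R + t nᵀ/d) X, which yields the matrix used here.

```python
import numpy as np

def plane_induced_homography(K1, K2, R, t, n, d):
    """Two-view homography induced by the plane {x : n.x = d} expressed in
    the first camera's frame, for a relative pose x2 = R @ x1 + t. In the
    described alignment step, this plane is tangent to the cylinder halfway
    between the two camera centres."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    H = K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)
    return H / H[2, 2]          # normalise the homogeneous scale
```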

66 Flow-based panorama synthesis Given the aligned images, the basic approach of stitching an omnistereo panorama [15] is to extract specific strips from each image. [sent-227, score-0.309]

67 Rotational drift (roll) between both ends of a panorama, and remaining vertical drift after cancellation of rotational drift, for image-based (IB) alignment and for our alignment, which exhibits virtually no drift. [sent-242, score-0.361]

68 Strips are extracted from each image – dependent on the desired stereoscopic output disparities – and combined into a left and right output view. [sent-243, score-0.317]

69 The omnistereo effect is achieved by collecting rays that are all tangent to a common viewing circle (Figure 4a). [sent-244, score-0.533]
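
For the idealised setup, this tangency condition fixes where each eye's strip sits in every input image: a ray through a camera centre at radius R_cam from the rotation centre, tangent to a viewing circle of radius r_eye, deviates from the radial viewing direction by arcsin(r_eye/R_cam). A sketch with our own symbol names (not the paper's notation):

```python
import numpy as np

def strip_offsets(r_eye, R_cam, f_px):
    """Horizontal image offsets of the left-/right-eye strips in the ideal
    omnistereo setup. The tangent ray makes angle beta with the (radial)
    viewing direction; with focal length f_px pixels it crosses the image
    plane f_px * tan(beta) pixels from the image centre."""
    beta = np.arcsin(r_eye / R_cam)
    dx = f_px * np.tan(beta)
    return +dx, -dx     # one offset per eye, mirrored about the image centre
```

Increasing r_eye moves the two strips apart and strengthens the stereo effect, which is why interactive control over the depth effect (item 88) amounts to re-sampling strips at different offsets.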

70 The correct sampling of input rays for the generation of a stereo panorama fundamentally depends on the targeted output resolution. [sent-245, score-0.603]

71 Ideally, we would like to sample strips from the input views that project to less than a pixel’s width in the output, to avoid aliasing artefacts and deviation of rays from the desired ray projection. [sent-246, score-0.805]

72 The deviation angles are defined as the angular difference between the ideal ray and the ray that is used for stitching (angles α1, α2 in Figure 4b). [sent-247, score-0.571]

73 With a coarser angular resolution the deviation angles grow (approximately inversely proportional to the angular resolution) and stitching artefacts such as discontinuities, duplications, and truncations of objects become apparent, as visible in Figure 1. [sent-251, score-0.92]

74 To mitigate these artefacts, one generally employs some form of smooth blending [3, 11]. [sent-252, score-0.448]

75 However, as demonstrated in Figure 5, such blending approaches may obscure these artefacts to some extent, but do not address the problem at its core. [sent-253, score-0.448]

76 However, a panorama on the order of 10 megapixels leads to an output width of about 7000 pixels for HD 720p input images. [sent-255, score-0.384]
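
A back-of-the-envelope computation (our numbers, derived from the figures quoted in items 71-76) shows why naive stitching aliases at this scale: sub-pixel strips would require roughly as many input images as output columns.

```python
# Illustration: a 7000-pixel-wide panorama stitched from N images taken on
# a full 360-degree circle; deviation angles shrink with the angular step.
output_width = 7000
for n_images in (100, 400, 1600, 7000):
    angular_step = 360.0 / n_images        # angular sampling between views (deg)
    strip_width = output_width / n_images  # pixels each image must contribute
    max_deviation = angular_step / 2.0     # worst-case ray deviation (deg)
    print(f"{n_images:5d} images: {strip_width:6.1f} px strips, "
          f"max deviation ~{max_deviation:6.3f} deg")
```

Capturing ~7000 views per panorama is rarely practical, which motivates synthesising the missing in-between rays by flow-based upsampling instead.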

77 Figure caption: comparison of linear and pyramid-based blending with our flow-based approach on typical seam artefacts encountered in stereo panorama stitching. [sent-257, score-0.644]

78 In the following, we first analyse the expected aliasing artefacts and seams, and then describe a flow-based interpolation approach to upsample the available rays on the fly to match the required output resolution while resolving the visual artefacts efficiently and effectively. [sent-261, score-1.043]

79 Filling the strip EG in the panorama will in general require additional nearby rays to compensate for the relatively coarse angular resolution between input images. [sent-267, score-0.658]

80 On the other hand, objects in the section AB at distance dnear appear truncated in the final panorama (see Figure 1, left closeup). [sent-276, score-0.297]

81 Figure 6 (panel labels: Panorama, Net flow). [sent-277, score-0.288]

82 We resolve these aliasing artefacts by generating the missing rays using optical flow-based upsampling, as illustrated in Figure 4d. [sent-279, score-0.617]

83 To avoid stitching artefacts, we interpolate the intermediate point p̃ between the corresponding points in the two adjacent input views. [sent-292, score-0.309]

84 Synthesising missing rays at an intermediate virtual camera location. [sent-296, score-0.281]

85 Flow-based blending To implement the above idea, we require pairwise optical flow [29]. [sent-303, score-0.323]

86 With this, the desired interpolation to synthesise in-between light rays on the fly is achieved by warping corresponding pixels by a fraction of the flow, depending on the horizontal angular interpolation factor η between two images. [sent-318, score-0.429]
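
As a hedged sketch of this idea (a first-order approximation assuming a locally smooth flow field; the paper's exact interpolation scheme may differ), the following warps both neighbouring views by a fraction of their pairwise flow and blends them linearly. The flows flow_ab and flow_ba can come from any dense pairwise method, e.g. cv2.calcOpticalFlowFarneback(gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0), whereas the paper uses the method of [29].

```python
import cv2
import numpy as np

def backward_warp(img, flow, frac):
    """Sample img at positions displaced by -frac * flow: a first-order
    backward-warping approximation that assumes locally smooth flow."""
    h, w = img.shape[:2]
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = gx - frac * flow[..., 0]
    map_y = gy - frac * flow[..., 1]
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def synthesise_inbetween(img_a, img_b, flow_ab, flow_ba, eta):
    """Warp both neighbouring views by a fraction of their pairwise flow and
    blend them linearly; eta in [0, 1] selects the in-between viewpoint.
    Images are assumed to be float arrays."""
    warp_a = backward_warp(img_a, flow_ab, eta)         # move A toward B
    warp_b = backward_warp(img_b, flow_ba, 1.0 - eta)   # move B toward A
    return (1.0 - eta) * warp_a + eta * warp_b
```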

87 Image-based alignment (left) is compromised by scene depth even after correcting for rotational and vertical drift, while our approach produces straight panoramas (right). [sent-320, score-0.618]

88 After the preprocessing, we can stitch the panoramas in real-time at full screen resolution, which gives the user the freedom to adapt important stereo viewing parameters like interaxial distance or vergence on the fly; please see the supplementary video. [sent-334, score-0.515]

89 To further emphasise the importance of our correction steps, we captured a dataset with large parallax caused by a person close to the camera (see Figure 8). [sent-344, score-0.434]

90 As illustrated in Figures 1 and 5, previous techniques like linear or pyramid-based blending [3] basically only try to hide the seam artefacts and thus do not give satisfactory results in stereoscopic panoramas featuring significant parallax. [sent-347, score-0.972]

91 Better results are expected if one tackles the under-sampling directly by using depth or optical flow to synthesise novel views [16, 19]. [sent-348, score-0.275]

92 Furthermore, they simply paste synthesised pixels, which leads to visible artefacts in places where the depth computation failed, e.g. [sent-350, score-0.409]

93 In this case, however, our solution automatically reduces to linear blending, which at these small locations allows us to remedy the stitching problems and degrades gracefully. [sent-354, score-0.309]

94 Conclusion We demonstrated a solution for creating high-quality stereo panoramas at megapixel resolution. [sent-361, score-0.441]

95 Figure caption: our flow-based approach compared with the stitching of Rav-Acha et al. [sent-364, score-0.355]

96 Firstly, we presented techniques to correct the camera orientations, remove undesired vertical parallax, and obtain a compact representation. [sent-367, score-0.479]

97 Secondly, we use optical flow to upsample the angular input resolution to generate the optimal number of rays for a given output resolution on the fly, effectively resolving aliasing. [sent-368, score-0.632]

98 In combination, our contributions allow, for the first time, the practical and robust creation of high-resolution stereo panoramas that are virtually free from artefacts like seams or vertical parallax. [sent-369, score-0.975]

99 We demonstrated that our contributions generalise to other multi-perspective stitching techniques like pushbroom images. [sent-371, score-0.468]

100 Top and left: Hand-held omnistereo panoramas captured by us, with 7 and 3 megapixels, respectively. [sent-378, score-0.605]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('panoramas', 0.328), ('stitching', 0.309), ('panorama', 0.297), ('artefacts', 0.277), ('omnistereo', 0.277), ('parallax', 0.237), ('stereoscopic', 0.196), ('blending', 0.171), ('rays', 0.158), ('seams', 0.152), ('pushbroom', 0.129), ('camera', 0.123), ('vertical', 0.119), ('strips', 0.115), ('multiperspective', 0.111), ('angular', 0.106), ('cylinder', 0.092), ('aliasing', 0.092), ('flow', 0.091), ('centres', 0.086), ('peleg', 0.079), ('ray', 0.078), ('correction', 0.074), ('fly', 0.071), ('stereo', 0.07), ('upsampling', 0.069), ('alignment', 0.066), ('mosaicing', 0.065), ('cylindrical', 0.063), ('optical', 0.061), ('synthesised', 0.061), ('homography', 0.057), ('centre', 0.055), ('fk', 0.055), ('drift', 0.053), ('discontinuities', 0.051), ('viewing', 0.05), ('circular', 0.05), ('kx', 0.05), ('synthesise', 0.049), ('megapixels', 0.049), ('strip', 0.049), ('resolution', 0.048), ('circle', 0.048), ('fl', 0.046), ('undistortion', 0.046), ('lenses', 0.045), ('desired', 0.045), ('resolving', 0.043), ('disturbing', 0.043), ('megapixel', 0.043), ('stitched', 0.043), ('resolutions', 0.043), ('rotational', 0.042), ('tangential', 0.041), ('rational', 0.041), ('zimmer', 0.041), ('il', 0.041), ('pinhole', 0.041), ('generation', 0.04), ('views', 0.04), ('upsample', 0.039), ('distortion', 0.039), ('view', 0.038), ('output', 0.038), ('trajectory', 0.037), ('dfar', 0.037), ('duplications', 0.037), ('idealised', 0.037), ('interaxial', 0.037), ('misperceptions', 0.037), ('refaim', 0.037), ('truncations', 0.037), ('visible', 0.037), ('shum', 0.037), ('ik', 0.035), ('lens', 0.035), ('distortions', 0.035), ('panoramic', 0.035), ('depth', 0.034), ('hd', 0.033), ('plane', 0.032), ('direction', 0.032), ('net', 0.031), ('capture', 0.031), ('mosaics', 0.03), ('generalise', 0.03), ('monoscopic', 0.03), ('vergence', 0.03), ('halfway', 0.03), ('lo', 0.03), ('lx', 0.03), ('street', 0.03), ('sfm', 0.029), ('virtually', 0.029), ('resolve', 0.029), ('purely', 0.029), ('straight', 0.029), ('zomet', 0.029), ('issues', 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000008 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas

Author: Christian Richardt, Yael Pritch, Henning Zimmer, Alexander Sorkine-Hornung

Abstract: We present a solution for generating high-quality stereo panoramas at megapixel resolutions. While previous approaches introduced the basic principles, we show that those techniques do not generalise well to today’s high image resolutions and lead to disturbing visual artefacts. As our first contribution, we describe the necessary correction steps and a compact representation for the input images in order to achieve a highly accurate approximation to the required ray space. Our second contribution is a flow-based upsampling of the available input rays which effectively resolves known aliasing issues like stitching artefacts. The required rays are generated on the fly to perfectly match the desired output resolution, even for small numbers of input images. In addition, the upsampling is real-time and enables direct interactive control over the desired stereoscopic depth effect. In combination, our contributions allow the generation of stereoscopic panoramas at high output resolutions that are virtually free of artefacts such as seams, stereo discontinuities, vertical parallax and other mono-/stereoscopic shape distortions. Our process is robust, and other types of multiperspective panoramas, such as linear panoramas, can also benefit from our contributions. We show various comparisons and high-resolution results.

2 0.16173846 47 cvpr-2013-As-Projective-As-Possible Image Stitching with Moving DLT

Author: Julio Zaragoza, Tat-Jun Chin, Michael S. Brown, David Suter

Abstract: We investigate projective estimation under model inadequacies, i.e., when the underpinning assumptions of the projective model are not fully satisfied by the data. We focus on the task of image stitching which is customarily solved by estimating a projective warp — a model that is justified when the scene is planar or when the views differ purely by rotation. Such conditions are easily violated in practice, and this yields stitching results with ghosting artefacts that necessitate the usage of deghosting algorithms. To this end we propose as-projective-as-possible warps, i.e., warps that aim to be globally projective, yet allow local non-projective deviations to account for violations to the assumed imaging conditions. Based on a novel estimation technique called Moving Direct Linear Transformation (Moving DLT), our method seamlessly bridges image regions that are inconsistent with the projective model. The result is highly accurate image stitching, with significantly reduced ghosting effects, thus lowering the dependency on post hoc deghosting.

3 0.15892604 177 cvpr-2013-FrameBreak: Dramatic Image Extrapolation by Guided Shift-Maps

Author: Yinda Zhang, Jianxiong Xiao, James Hays, Ping Tan

Abstract: We significantly extrapolate the field of view of a photograph by learning from a roughly aligned, wide-angle guide image of the same scene category. Our method can extrapolate typical photos into complete panoramas. The extrapolation problem is formulated in the shift-map image synthesis framework. We analyze the self-similarity of the guide image to generate a set of allowable local transformations and apply them to the input image. Our guided shift-map method preserves the scene layout of the guide image when extrapolating a photograph. While conventional shift-map methods only support translations, this is not expressive enough to characterize the self-similarity of complex scenes. Therefore we additionally allow image transformations of rotation, scaling and reflection. To handle this increase in complexity, we introduce a hierarchical graph optimization method to choose the optimal transformation at each output pixel. We demonstrate our approach on a variety of indoor, outdoor, natural, and man-made scenes.

4 0.1322051 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasipinhole central cameras is supported by an extensive experimental validation.

5 0.11391145 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

Author: Sven Wanner, Christoph Straehle, Bastian Goldluecke

Abstract: We present the first variational framework for multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, features need to be extracted from the 2D projection of a three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of an image not only enables to train classifiers which can overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online with complete depth and segmentation ground truth data where available, and thus establish the first benchmark data set for light field analysis to facilitate competitive further development of algorithms.

6 0.1025258 124 cvpr-2013-Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform

7 0.10046759 317 cvpr-2013-Optimal Geometric Fitting under the Truncated L2-Norm

8 0.096466124 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

9 0.092075117 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

10 0.089762308 232 cvpr-2013-Joint Geodesic Upsampling of Depth Images

11 0.089711688 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

12 0.087222986 244 cvpr-2013-Large Displacement Optical Flow from Nearest Neighbor Fields

13 0.086022742 362 cvpr-2013-Robust Monocular Epipolar Flow Estimation

14 0.084956542 345 cvpr-2013-Real-Time Model-Based Rigid Object Pose Estimation and Tracking Combining Dense and Sparse Visual Cues

15 0.080260143 344 cvpr-2013-Radial Distortion Self-Calibration

16 0.078462094 59 cvpr-2013-Better Exploiting Motion for Better Action Recognition

17 0.077335767 111 cvpr-2013-Dense Reconstruction Using 3D Object Shape Priors

18 0.074031278 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

19 0.073388621 170 cvpr-2013-Fast Rigid Motion Segmentation via Incrementally-Complex Local Models

20 0.07294745 117 cvpr-2013-Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.148), (1, 0.158), (2, 0.011), (3, 0.018), (4, -0.025), (5, -0.051), (6, -0.025), (7, -0.033), (8, 0.021), (9, 0.044), (10, 0.001), (11, 0.109), (12, 0.123), (13, -0.039), (14, -0.033), (15, 0.006), (16, 0.02), (17, 0.004), (18, -0.007), (19, 0.02), (20, 0.031), (21, -0.036), (22, 0.013), (23, -0.058), (24, 0.019), (25, -0.017), (26, 0.012), (27, -0.002), (28, 0.008), (29, 0.009), (30, -0.017), (31, -0.008), (32, 0.004), (33, 0.047), (34, 0.009), (35, -0.034), (36, -0.048), (37, 0.035), (38, -0.07), (39, 0.079), (40, -0.009), (41, -0.004), (42, -0.0), (43, -0.035), (44, 0.009), (45, -0.035), (46, -0.009), (47, -0.08), (48, -0.013), (49, 0.046)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94465566 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas

Author: Christian Richardt, Yael Pritch, Henning Zimmer, Alexander Sorkine-Hornung

Abstract: We present a solution for generating high-quality stereo panoramas at megapixel resolutions. While previous approaches introduced the basic principles, we show that those techniques do not generalise well to today’s high image resolutions and lead to disturbing visual artefacts. As our first contribution, we describe the necessary correction steps and a compact representation for the input images in order to achieve a highly accurate approximation to the required ray space. Our second contribution is a flow-based upsampling of the available input rays which effectively resolves known aliasing issues like stitching artefacts. The required rays are generated on the fly to perfectly match the desired output resolution, even for small numbers of input images. In addition, the upsampling is real-time and enables direct interactive control over the desired stereoscopic depth effect. In combination, our contributions allow the generation of stereoscopic panoramas at high output resolutions that are virtually free of artefacts such as seams, stereo discontinuities, vertical parallax and other mono-/stereoscopic shape distortions. Our process is robust, and other types of multiperspective panoramas, such as linear panoramas, can also benefit from our contributions. We show various comparisons and high-resolution results.

2 0.77135396 344 cvpr-2013-Radial Distortion Self-Calibration

Author: José Henrique Brito, Roland Angst, Kevin Köser, Marc Pollefeys

Abstract: In cameras with radial distortion, straight lines in space are in general mapped to curves in the image. Although epipolar geometry also gets distorted, there is a set of special epipolar lines that remain straight, namely those that go through the distortion center. By finding these straight epipolar lines in camera pairs we can obtain constraints on the distortion center(s) without any calibration object or plumbline assumptions in the scene. Although this holds for all radial distortion models we conceptually prove this idea using the division distortion model and the radial fundamental matrix which allow for a very simple closed form solution of the distortion center from two views (same distortion) or three views (different distortions). The non-iterative nature of our approach makes it immune to local minima and allows finding the distortion center also for cropped images or those where no good prior exists. Besides this, we give comprehensive relations between different undistortion models and discuss advantages and drawbacks.

3 0.75381911 102 cvpr-2013-Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras

Author: Donald G. Dansereau, Oscar Pizarro, Stefan B. Williams

Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedure for lenselet-based plenoptic cameras appropriate for a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.

4 0.75118345 337 cvpr-2013-Principal Observation Ray Calibration for Tiled-Lens-Array Integral Imaging Display

Author: Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim

Abstract: Integral imaging display (IID) is a promising technology to provide realistic 3D image without glasses. To achieve a large screen IID with a reasonable fabrication cost, a potential solution is a tiled-lens-array IID (TLA-IID). However, TLA-IIDs are subject to 3D image artifacts when there are even slight misalignments between the lens arrays. This work aims at compensating these artifacts by calibrating the lens array poses with a camera and including them in a ray model used for rendering the 3D image. Since the lens arrays are transparent, this task is challenging for traditional calibration methods. In this paper, we propose a novel calibration method based on defining a set of principle observation rays that pass lens centers of the TLA and the camera's optical center. The method is able to determine the lens array poses with only one camera at an arbitrary unknown position without using any additional markers. The principle observation rays are automatically extracted using a structured light based method from a dense correspondence map between the displayed and captured pixels. Experiments show that lens array misalignments can be estimated with a standard deviation smaller than 0.4 pixels. Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.

5 0.71352273 76 cvpr-2013-Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

Author: Filippo Bergamasco, Andrea Albarelli, Emanuele Rodolà, Andrea Torsello

Abstract: Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central camera settings dominated by the pinhole model, and introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher precision calibration than what is possible with the standard pinhole model with correction for radial distortion. This effectively extends the use of general models to settings that traditionally have been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model to quasipinhole central cameras is supported by an extensive experimental validation.

6 0.70736241 290 cvpr-2013-Motion Estimation for Self-Driving Cars with a Generalized Camera

7 0.69970983 279 cvpr-2013-Manhattan Scene Understanding via XSlit Imaging

8 0.6980648 368 cvpr-2013-Rolling Shutter Camera Calibration

9 0.67878252 47 cvpr-2013-As-Projective-As-Possible Image Stitching with Moving DLT

10 0.66960937 349 cvpr-2013-Reconstructing Gas Flows Using Light-Path Approximation

11 0.66442949 124 cvpr-2013-Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform

12 0.65713584 84 cvpr-2013-Cloud Motion as a Calibration Cue

13 0.64912087 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

14 0.64753139 195 cvpr-2013-HDR Deghosting: How to Deal with Saturation?

15 0.63973898 37 cvpr-2013-Adherent Raindrop Detection and Removal in Video

16 0.62496203 176 cvpr-2013-Five Shades of Grey for Fast and Reliable Camera Pose Estimation

17 0.61965835 428 cvpr-2013-The Episolar Constraint: Monocular Shape from Shadow Correspondence

18 0.60945988 333 cvpr-2013-Plane-Based Content Preserving Warps for Video Stabilization

19 0.60721427 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields

20 0.57903433 362 cvpr-2013-Robust Monocular Epipolar Flow Estimation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(6, 0.265), (10, 0.152), (16, 0.039), (26, 0.043), (28, 0.015), (33, 0.22), (67, 0.026), (69, 0.029), (87, 0.103)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.8223846 283 cvpr-2013-Megastereo: Constructing High-Resolution Stereo Panoramas

Author: Christian Richardt, Yael Pritch, Henning Zimmer, Alexander Sorkine-Hornung

Abstract: We present a solution for generating high-quality stereo panoramas at megapixel resolutions. While previous approaches introduced the basic principles, we show that those techniques do not generalise well to today’s high image resolutions and lead to disturbing visual artefacts. As our first contribution, we describe the necessary correction steps and a compact representation for the input images in order to achieve a highly accurate approximation to the required ray space. Our second contribution is a flow-based upsampling of the available input rays which effectively resolves known aliasing issues like stitching artefacts. The required rays are generated on the fly to perfectly match the desired output resolution, even for small numbers of input images. In addition, the upsampling is real-time and enables direct interactive control over the desired stereoscopic depth effect. In combination, our contributions allow the generation of stereoscopic panoramas at high output resolutions that are virtually free of artefacts such as seams, stereo discontinuities, vertical parallax and other mono-/stereoscopic shape distortions. Our process is robust, and other types of multiperspective panoramas, such as linear panoramas, can also benefit from our contributions. We show various comparisons and high-resolution results.

2 0.77675682 32 cvpr-2013-Action Recognition by Hierarchical Sequence Summarization

Author: Yale Song, Louis-Philippe Morency, Randall Davis

Abstract: Recent progress has shown that learning from hierarchical feature representations leads to improvements in various computer vision tasks. Motivated by the observation that human activity data contains information at various temporal resolutions, we present a hierarchical sequence summarization approach for action recognition that learns multiple layers of discriminative feature representations at different temporal granularities. We build up a hierarchy dynamically and recursively by alternating sequence learning and sequence summarization. For sequence learning we use CRFs with latent variables to learn hidden spatiotemporal dynamics; for sequence summarization we group observations that have similar semantic meaning in the latent space. For each layer we learn an abstract feature representation through non-linear gate functions. This procedure is repeated to obtain a hierarchical sequence summary representation. We develop an efficient learning method to train our model and show that its complexity grows sublinearly with the size of the hierarchy. Experimental results show the effectiveness of our approach, achieving the best published results on the ArmGesture and Canal9 datasets.

3 0.73168433 168 cvpr-2013-Fast Object Detection with Entropy-Driven Evaluation

Author: Raphael Sznitman, Carlos Becker, François Fleuret, Pascal Fua

Abstract: Cascade-style approaches to implementing ensemble classifiers can deliver significant speed-ups at test time. While highly effective, they remain challenging to tune and their overall performance depends on the availability of large validation sets to estimate rejection thresholds. These characteristics are often prohibitive and thus limit their applicability. We introduce an alternative approach to speeding-up classifier evaluation which overcomes these limitations. It involves maintaining a probability estimate of the class label at each intermediary response and stopping when the corresponding uncertainty becomes small enough. As a result, the evaluation terminates early based on the sequence of responses observed. Furthermore, it does so independently of the type of ensemble classifier used or the way it was trained. We show through extensive experimentation that our method provides 2 to 10 fold speed-ups, over existing state-of-the-art methods, at almost no loss in accuracy on a number of object classification tasks.

4 0.71687567 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

Author: Horst Possegger, Sabine Sternig, Thomas Mauthner, Peter M. Roth, Horst Bischof

Abstract: Combining foreground images from multiple views by projecting them onto a common ground-plane has been recently applied within many multi-object tracking approaches. These planar projections introduce severe artifacts and constrain most approaches to objects moving on a common 2D ground-plane. To overcome these limitations, we introduce the concept of an occupancy volume exploiting the full geometry and the objects' center of mass and develop an efficient algorithm for 3D object tracking. Individual objects are tracked using the local mass density scores within a particle filter based approach, constrained by a Voronoi partitioning between nearby trackers. Our method benefits from the geometric knowledge given by the occupancy volume to robustly extract features and train classifiers on-demand, when volumetric information becomes unreliable. We evaluate our approach on several challenging real-world scenarios including the public APIDIS dataset. Experimental evaluations demonstrate significant improvements compared to state-of-the-art methods, while achieving real-time performance.

5 0.71677816 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

Author: Amit Agrawal, Srikumar Ramalingam

Abstract: Imaging systems consisting of a camera looking at multiple spherical mirrors (reflection) or multiple refractive spheres (refraction) have been used for wide-angle imaging applications. We describe such setups as multi-axial imaging systems, since a single sphere results in an axial system. Assuming an internally calibrated camera, calibration of such multi-axial systems involves estimating the sphere radii and locations in the camera coordinate system. However, previous calibration approaches require manual intervention or constrained setups. We present a fully automatic approach using a single photo of a 2D calibration grid. The pose of the calibration grid is assumed to be unknown and is also recovered. Our approach can handle unconstrained setups, where the mirrors/refractive balls can be arranged in any fashion, not necessarily on a grid. The axial nature of rays allows us to compute the axis of each sphere separately. We then show that by choosing rays from two or more spheres, the unknown pose of the calibration grid can be obtained linearly and independently of sphere radii and locations. Knowing the pose, we derive analytical solutions for obtaining the sphere radius and location. This leads to an interesting result that 6-DOF pose estimation of a multi-axial camera can be done without the knowledge of full calibration. Simulations and real experiments demonstrate the applicability of our algorithm.

6 0.71676433 393 cvpr-2013-Separating Signal from Noise Using Patch Recurrence across Scales

7 0.71355224 447 cvpr-2013-Underwater Camera Calibration Using Wavelength Triangulation

8 0.71351004 298 cvpr-2013-Multi-scale Curve Detection on Surfaces

9 0.71338254 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

10 0.71269357 19 cvpr-2013-A Minimum Error Vanishing Point Detection Approach for Uncalibrated Monocular Images of Man-Made Environments

11 0.71268809 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path

12 0.71267545 303 cvpr-2013-Multi-view Photometric Stereo with Spatially Varying Isotropic Materials

13 0.71183807 227 cvpr-2013-Intrinsic Scene Properties from a Single RGB-D Image

14 0.7117421 143 cvpr-2013-Efficient Large-Scale Structured Learning

15 0.71172583 331 cvpr-2013-Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis

16 0.71149474 406 cvpr-2013-Spatial Inference Machines

17 0.71106899 286 cvpr-2013-Mirror Surface Reconstruction from a Single Image

18 0.71101648 431 cvpr-2013-The Variational Structure of Disparity and Regularization of 4D Light Fields

19 0.71098661 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

20 0.71087641 188 cvpr-2013-Globally Consistent Multi-label Assignment on the Ray Space of 4D Light Fields