iccv iccv2013 iccv2013-226 knowledge-graph by maker-knowledge-mining

226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video


Source: pdf

Author: Feng Liu, Yuzhen Niu, Hailin Jin

Abstract: Shaky stereoscopic video is not only unpleasant to watch but may also cause 3D fatigue. Stabilizing the left and right view of a stereoscopic video separately using a monocular stabilization method tends to both introduce undesirable vertical disparities and damage horizontal disparities, which may destroy the stereoscopic viewing experience. In this paper, we present a joint subspace stabilization method for stereoscopic video. We prove that the low-rank subspace constraint for monocular video [10] also holds for stereoscopic video. Particularly, the feature trajectories from the left and right video share the same subspace. Based on this proof, we develop a stereo subspace stabilization method that jointly computes a common subspace from the left and right video and uses it to stabilize the two videos simultaneously. Our method meets the stereoscopic constraints without 3D reconstruction or explicit left-right correspondence. We test our method on a variety of stereoscopic videos with different scene content and camera motion. The experiments show that our method achieves high-quality stabilization for stereoscopic video in a robust and efficient way.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract. Shaky stereoscopic video is not only unpleasant to watch but may also cause 3D fatigue. [sent-8, score-0.744]

2 Stabilizing the left and right view of a stereoscopic video separately using a monocular stabilization method tends to both introduce undesirable vertical disparities and damage horizontal disparities, which may destroy the stereoscopic viewing experience. [sent-9, score-2.558]

3 In this paper, we present a joint subspace stabilization method for stereoscopic video. [sent-10, score-1.36]

4 We prove that the low-rank subspace constraint for monocular video [10] also holds for stereoscopic video. [sent-11, score-1.055]

5 Particularly, the feature trajectories from the left and right video share the same subspace. [sent-12, score-0.485]

6 Based on this proof, we develop a stereo subspace stabilization method that jointly computes a common subspace from the left and right video and uses it to stabilize the two videos simultaneously. [sent-13, score-1.362]

7 Our method meets the stereoscopic constraints without 3D reconstruction or explicit left-right correspondence. [sent-14, score-0.624]

8 We test our method on a variety of stereoscopic videos with different scene content and camera motion. [sent-15, score-0.718]

9 The experiments show that our method achieves high-quality stabilization for stereoscopic video in a robust and efficient way. [sent-16, score-1.232]

10 Thanks to the recent success of 3D movies, there is a resurgence of interest in stereoscopic video. [sent-19, score-0.565]

11 Nowadays we have capable hardware for displaying and capturing stereoscopic video. [sent-20, score-0.565]

12 However, the development in stereoscopic video processing is still in its infancy. [sent-21, score-0.695]

13 This paper addresses an important stereoscopic video processing problem, video stabilization. [sent-22, score-0.825]

14 Video stabilization is the problem of removing undesired camera shake from a video. [sent-23, score-0.627]

15 It has been shown that a good video stabilization algorithm can significantly improve the visual quality of an amateur video, making it close to the level of a professional one [14]. [sent-24, score-0.667]

16 Stabilization algorithms play an even more important role in stereoscopic video because the effect of camera shake is more pronounced in stereo [19]. [sent-25, score-0.786]

17 In particular, the temporal jitter in a shaky stereoscopic video can cause an excessive demand on the accommodation-vergence linkage [11]. [sent-26, score-0.8]

18 Applying a homography-based monocular stabilization method [20] to each view of a stereoscopic video is problematic as it often damages the original horizontal disparities and induces vertical disparities. [sent-28, score-1.812]

19 Existing 3D reconstruction-based stabilization methods can be readily extended to handle stereoscopic video [3, 5, 14]. [sent-30, score-1.232]

20 Once the 3D camera motion and scene structure are estimated, we can smooth the camera motion and synthesize a new pair of left and right videos to follow the smooth camera motion. [sent-31, score-0.608]

21 In this work, we present a joint subspace stabilization method for stereoscopic video. [sent-34, score-1.36]

22 This method handles disparity problems in video stabilization without 3D reconstruction or even explicit left-right correspondence estimation. [sent-35, score-0.84]

23 According to Irani [10], the feature trajectories from a short monocular video lie in a low-rank subspace. [sent-36, score-0.446]

24 We extend this to stereoscopic video and prove that the feature trajectories from the left and right video of a stereoscopic video share a common subspace spanned by a small number of eigentrajectories. [sent-37, score-2.12]

25 In this way, no vertical disparities will be introduced and horizontal disparities will be smoothed. [sent-40, score-0.697]

26 These two are the key properties of a successful stereoscopic video stabilization method. [sent-41, score-1.232]

27 First, we prove that the low-rank subspace constraint for monocular video [10] also holds for stereo video with no increase in the rank. [sent-43, score-0.684]

28 By combining the trajectories from the left and right video, our method is robust to the insufficiency of long trajectories and to degeneration in scene content, camera motion, and tracking error. [sent-46, score-0.598]

29 Related Work. Existing work on monocular video stabilization can be categorized into 2D and 3D reconstruction-based methods. [sent-48, score-0.764]

30 If they are independently applied to the left and right view of a stereo video, they can often introduce vertical disparities which may damage the stereoscopic viewing experience. [sent-51, score-1.207]

31 3D reconstruction-based video stabilization methods compute 3D models of the scene and use image-based rendering techniques to render the stabilized video [2, 3, 5, 14]. [sent-52, score-0.887]

32 They can achieve similar stabilization effects to the 3D reconstruction-based methods but retain most of the robustness and efficiency of the 2D methods. [sent-59, score-0.537]

33 Unlike 3D reconstruction-based methods, applying them independently to each view of a stereoscopic video often does not produce satisfying results as shown later on. [sent-60, score-0.718]

34 There is limited work on video stabilization beyond monocular cameras. [sent-62, score-0.764]

35 Their method uses similarity transformations and includes an additional term to account for the vertical disparities between left and right video. [sent-65, score-0.507]

36 It shares the same limitations as the 3D reconstruction-based stabilization methods in terms of robustness and efficiency. [sent-70, score-0.537]

37 Joint Subspace Video Stabilization. Like monocular video stabilization methods, our method first tracks and smooths feature trajectories from input video, and then synthesizes stabilized video guided by the smooth feature trajectories. [sent-75, score-1.242]

38 Specifically, we first track feature trajectories by applying the KLT algorithm [21] to the left and right video separately. [sent-76, score-0.483]
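
As a rough illustration of this tracking step, the sketch below uses OpenCV's pyramidal Lucas-Kanade tracker as a stand-in for the KLT implementation cited as [21]; the file names, corner-detector parameters, and the simple alive-mask bookkeeping are all our own assumptions, not the paper's.

```python
# Minimal per-view KLT tracking sketch (OpenCV's pyramidal LK as a KLT
# stand-in). Parameters and file names are illustrative assumptions.
import cv2
import numpy as np

def track_view(path, max_corners=500):
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p = cv2.goodFeaturesToTrack(prev, max_corners, qualityLevel=0.01, minDistance=8)
    alive = np.ones(len(p), dtype=bool)
    traj = [p.reshape(-1, 2).copy()]            # traj[t][i]: feature i at frame t
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, p, None)
        alive &= (st.ravel() == 1)              # once lost, a feature stays lost
        p = nxt
        traj.append(nxt.reshape(-1, 2).copy())
        prev = gray
    return np.stack(traj), alive                # (frames, features, 2) and mask

left_traj, left_ok = track_view("left.mp4")     # the two views are tracked
right_traj, right_ok = track_view("right.mp4")  # separately, as stated above
```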

39 The stabilization task is to obtain two sets of new feature trajectories {x̂_i^L(t), ŷ_i^L(t)} and {x̂_i^R(t), ŷ_i^R(t)} that guide the rendering of a pair of stabilized videos. [sent-80, score-0.783]

40 Below we first show that if an affine camera model is assumed, we can easily perform 3D reconstruction using matrix factorization for stereo video stabilization. [sent-87, score-0.468]

41 This shows that for an affine camera, the displacement matrix from a stereoscopic video can be factored into a camera matrix E and scene matrices CL and CR. [sent-134, score-0.976]

42 We can then perform stereoscopic video stabilization in the following steps. [sent-136, score-1.232]

43 Estimate feature trajectories from the left and right video separately and assemble a joint trajectory displacement matrix according to Equation 1. [sent-138, score-0.735]

44 Warp the left and right video guided by the smooth trajectories using content-preserving warping [14]. [sent-144, score-0.494]
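
Reading sentences 42-44 as code, the affine-case factorization might be sketched as follows; M_L and M_R are assumed per-view displacement matrices with one row per feature coordinate and one column per frame, and the rank r is left as a parameter because this summary does not state it for the affine stereo case. The warping step itself (content-preserving warping [14]) is beyond a short sketch and is only noted in a comment.

```python
# Sketch of the affine-case factorization (summary sentences 42-44). M_L, M_R
# are assumed per-view displacement matrices of shape (2*features, frames);
# the rank r is an unstated parameter here.
import numpy as np

def factor_joint(M_L, M_R, r):
    W = np.vstack([M_L, M_R])                 # joint displacement matrix (Eq. 1)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    C = U[:, :r] * s[:r]                      # stacked scene matrices [C_L; C_R]
    E = Vt[:r, :]                             # shared camera matrix E
    return C, E                               # smooth E, recompose C @ E_smooth,
                                              # then warp each view with [14]
```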

45 This approach to stereoscopic video stabilization clearly will not introduce disparity problems as it only smooths the motion of the stereo camera rig. [sent-145, score-1.49]

46 Perspective Stereo Video Stabilization. We are inspired by the recent subspace video stabilization method for monocular video that handles a more general camera, i.e., a perspective camera. [sent-149, score-1.108]

47 This monocular stabilization method was based on the subspace observation that the feature trajectories of a short video imaged by a perspective camera lie in a low-rank subspace [10]. [sent-152, score-1.535]

48 But as has been shown in [15], the matrix E, called the "eigen-trajectory matrix", can be smoothed in the same way as the camera matrix for the affine camera, and the smooth trajectories can be obtained by composing the matrix C with the smoothed eigen-trajectory matrix. [sent-155, score-0.532]
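
A sketch of that smooth-and-compose step; the Gaussian low-pass filter along the time axis is our illustrative choice, not necessarily the smoother used in [15].

```python
# Sketch: low-pass the eigen-trajectory matrix E (rows = eigen-trajectories,
# columns = frames) and recompose smooth trajectories. The Gaussian kernel is
# an assumption; [15] may smooth E differently.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_and_compose(C, E, sigma=20.0):
    E_smooth = gaussian_filter1d(E, sigma, axis=1)  # smooth each row over time
    return C @ E_smooth                             # smooth trajectory matrix
```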

49 In the following, we first prove that the subspace constraint is also valid for stereoscopic video imaged by perspective cameras. [sent-156, score-1.019]

50 In fact, the trajectories from the left and right video share a common subspace. [sent-157, score-0.458]

51 Accordingly, we can apply an approach similar to the one used for stereoscopic video captured by affine cameras to video captured by perspective cameras. [sent-159, score-0.824]

52 However, it is interesting to see that there is no increase in the rank when we go from a monocular video to a stereoscopic one. [sent-200, score-0.848]

53 Furthermore, we note that the left and right video share the same low-dimensional subspace E. [sent-201, score-0.48]

54 Rank 9 is also used in the monocular subspace stabilization method [15]. [sent-215, score-0.848]
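
A quick numerical illustration of the shared-subspace claim: when both views' trajectories are generated over one rank-9 eigen-trajectory matrix, stacking them does not raise the rank. The data here is synthetic and purely illustrative.

```python
# Synthetic check: both views generated over a shared rank-9 eigen-trajectory
# matrix E, so the stacked joint matrix keeps rank 9.
import numpy as np

rng = np.random.default_rng(0)
F, nL, nR = 60, 200, 180                   # frames, left/right feature counts
E = rng.standard_normal((9, F))            # shared eigen-trajectories
C_L = rng.standard_normal((2 * nL, 9))     # left-view coefficients
C_R = rng.standard_normal((2 * nR, 9))     # right-view coefficients
W = np.vstack([C_L @ E, C_R @ E])          # joint trajectory matrix
print(np.linalg.matrix_rank(W))            # 9: no increase over a single view
```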

55 Summary: joint subspace stabilization. We now summarize our stereoscopic video stabilization method for perspective cameras as follows. [sent-218, score-2.118]

56 As we estimate the eigen-trajectory matrix E in Equation 9 jointly from the trajectories of the left and right video, we call our method joint subspace video stabilization. [sent-219, score-0.735]

57 Estimate feature trajectories from the left and right video separately and assemble a joint trajectory matrix M̂ = [M̂_L; M̂_R], stacking the left-view trajectory matrix on top of the right-view one. [sent-221, score-0.676]

58 Disparity. We now examine how this joint subspace stabilization method works on disparities in a stereoscopic video. [sent-237, score-1.664]

59 Particularly, if the input video has zero vertical disparities (assuming a rectified stereo rig and no noise in feature tracking), C_R^y = C_L^y. [sent-240, score-0.635]

60 Then the output trajectories will have no vertical disparities either. [sent-241, score-0.585]
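
A one-line restatement of the argument (notation assumed consistent with this summary, with Ê the jointly smoothed eigen-trajectory matrix and C^y the vertical-coordinate rows of the coefficient matrices):

$$\hat{y}_L = C_L^y \hat{E}, \quad \hat{y}_R = C_R^y \hat{E} \quad\Rightarrow\quad C_R^y = C_L^y \implies \hat{y}_R - \hat{y}_L = 0 \text{ at every frame.}$$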

61 Meanwhile, as horizontal disparities in a stereoscopic video typically change only slightly over time, smoothing will not change disparity magnitudes significantly. [sent-246, score-1.184]

62 Separate subspace. Equation 9 shows that the feature trajectories from the left and right video share the same subspace. [sent-250, score-0.755]

63 This actually implies that, ideally, applying the monocular subspace stabilization method [15] to each video separately, called separate subspace stabilization, will lead to the same stabilization result as our joint subspace method. [sent-251, score-2.043]

64 However, separate subspace stabilization often cannot work well in practice. [sent-252, score-0.807]

65 The subspaces separately estimated for the left and right video are often different when there are feature tracking errors or the sets of feature trajectories for the two videos differ from each other significantly. [sent-253, score-0.608]

66 A discrepancy between the left and right stabilization results will then be introduced. [sent-254, score-0.651]

67 We performed a simulation to compare the joint subspace method with the separate subspace method. [sent-255, score-0.528]

68 One concern with the joint subspace method is a possibly increased subspace fitting error, as only one subspace is estimated instead of two. [sent-279, score-0.686]

69 We examined the fitting error for 20 stereoscopic videos, each of which has a frame size of 640 × 360. [sent-280, score-0.613]
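
One plausible reading of "fitting error" is the relative residual of the rank-9 fit to the joint trajectory matrix; the paper's exact metric is not given in this summary, so the sketch below is an assumption.

```python
# Sketch: relative Frobenius fitting error of the rank-9 subspace fit to the
# joint trajectory matrix W. The paper's exact error metric is not stated in
# this summary; this is one reasonable choice.
import numpy as np

def fitting_error(W, r=9):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_r = (U[:, :r] * s[:r]) @ Vt[:r, :]   # best rank-r approximation of W
    return np.linalg.norm(W - W_r) / np.linalg.norm(W)
```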

70 Experiments. We tested our approach on a collection of 57 stereoscopic videos, ranging from 10 to 80 seconds, captured by a variety of stereoscopic cameras in many different scenes. [sent-293, score-1.16]

71 We compared our joint subspace stabilization method against the separate subspace stabilization method of applying [15] independently to the left and right view. [sent-294, score-1.716]

72 We did not compare our method against any 2D or 3D stabilization methods because that would almost amount to comparing [15] against 2D or 3D methods which is already covered in [15]. [sent-296, score-0.537]

73 We consider a result unsuccessful if either the algorithm fails to process the video or the resulting stereoscopic video is uncomfortable to watch on a stereoscopic display. [sent-299, score-1.418]

74 In addition to the same two videos that failed our method, the separate subspace method failed on another 7 videos where the camera moves very quickly and there are not enough long feature trajectories in at least one of the two views. [sent-303, score-0.79]

75 Since the subspace constraint is applied to each view independently, the feature trajectories of the two views can often be smoothed inconsistently, which sometimes leads to vertical disparities, as described in Section 3. [sent-311, score-0.593]

76 We examined the vertical and horizontal disparities as well as their second derivatives in the output videos. [sent-319, score-0.51]

77 edu/~fliu/project/joint-subspace. Figure 2: (a) Input frame sequence; (b) separate subspace stabilization results; (c) joint subspace stabilization results. [sent-334, score-1.741]

78 Stabilizing each view independently causes vertical disparities and drastically disturbs the horizontal disparities, as shown in (b). [sent-337, score-0.483]

79 the left and right view and thus be able to provide more feature trajectories for the robust statistics of the stabilization results. [sent-338, score-0.893]

80 We then compute the horizontal and vertical disparities and their second derivatives. [sent-342, score-0.46]
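
A sketch of those evaluation measurements, assuming matched left/right feature positions stored as (frames, features) arrays; the averaging scheme is our assumption.

```python
# Sketch: horizontal/vertical disparities and their temporal second
# derivatives from matched left/right feature positions of shape
# (frames, features). Averaging over |.| is an illustrative choice.
import numpy as np

def disparity_stats(xL, yL, xR, yR):
    h = xR - xL                            # horizontal disparity
    v = yR - yL                            # vertical disparity
    h2 = np.diff(h, n=2, axis=0)           # second temporal derivative
    v2 = np.diff(v, n=2, axis=0)
    return np.mean(np.abs(v)), np.mean(np.abs(h2)), np.mean(np.abs(v2))
```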

81 They were only used to evaluate the stabilization results. [sent-344, score-0.537]

82 We found that our method is able to produce stereoscopic video with smaller vertical disparities. [sent-345, score-0.784]

83 In particular, the average vertical disparity by the separate subspace method is 0. [sent-346, score-0.45]

84 The average vertical disparities of the separate subspace method and our method are 1. [sent-352, score-0.663]

85 The average second derivatives of horizontal disparities of the input videos and the output videos from both methods are 0. [sent-355, score-0.54]

86 Since the horizontal disparities are inversely proportional to the depth and the jitter of the camera motion in depth is often small, the jitters in most feature trajectories are small except for scene points close to the camera. [sent-357, score-0.806]

87 We find that both the separate subspace and joint subspace methods reduce it from 1. [sent-359, score-0.484]

88 That is, we did not include the 9 videos that failed the separate subspace method and had no stabilization results. [sent-363, score-0.917]

89 These results show that our joint subspace method can successfully avoid introducing extra vertical disparities and smooth horizontal disparities. [sent-364, score-0.756]

90 The reason that the separate subspace method does not produce significantly large vertical disparities or second disparity derivatives is that these 48 videos have abundant long feature trajectories and thus can be handled by the separate subspace method, as discussed in Section 3. [sent-365, score-1.359]

91 In particular, it is able to produce high-quality stabilization results without computing a 3D reconstruction, and it is robust, efficient, and allows a streaming implementation. [sent-384, score-0.596]

92 However, our method still requires enough long feature trajectories for matrix factorization and has difficulty handling videos with dominant scene motion, excessive shake, or strong motion blur, such as the examples shown in Figure 3. [sent-386, score-0.501]

93 Conclusion. This paper presented a joint subspace stabilization method for stereoscopic video. [sent-390, score-1.36]

94 We proved that the subspace constraint is valid for stereoscopic video. [sent-391, score-0.797]

95 We showed that we can stabilize stereoscopic video without any explicit correspondence computation. [sent-393, score-0.736]

96 Moreover, our method is more robust to the presence of short trajectories than the monocular subspace stabilization method. [sent-394, score-1.04]

97 We validated our method on a variety of stereoscopic videos with different scene content and camera motion. [sent-395, score-0.718]

98 Our experiments show that our method is able to achieve high-quality stereoscopic video stabilization in a robust and efficient way. [sent-396, score-1.232]

99 Auto-directed video stabilization with robust L1 optimal camera paths. [sent-467, score-0.73]

100 3D cinematography principles and their applications to stereoscopic media processing. [sent-505, score-0.565]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('stereoscopic', 0.565), ('stabilization', 0.537), ('disparities', 0.304), ('subspace', 0.214), ('trajectories', 0.192), ('video', 0.13), ('xr', 0.113), ('monocular', 0.097), ('disparity', 0.091), ('vertical', 0.089), ('trajectory', 0.081), ('factorization', 0.073), ('videos', 0.071), ('horizontal', 0.067), ('stereo', 0.064), ('camera', 0.063), ('perspective', 0.061), ('yr', 0.061), ('left', 0.061), ('reconstruction', 0.059), ('xir', 0.059), ('displacement', 0.059), ('cr', 0.057), ('separate', 0.056), ('right', 0.053), ('cl', 0.052), ('xl', 0.051), ('rolling', 0.049), ('klt', 0.046), ('joint', 0.044), ('stabilized', 0.044), ('shutter', 0.044), ('shaky', 0.043), ('matrix', 0.041), ('instantaneous', 0.04), ('motion', 0.04), ('failed', 0.039), ('affine', 0.038), ('gleicher', 0.038), ('smooth', 0.038), ('rank', 0.038), ('ur', 0.035), ('vr', 0.035), ('subspaces', 0.032), ('prove', 0.031), ('cameras', 0.03), ('smoothed', 0.03), ('jin', 0.03), ('yl', 0.03), ('proposition', 0.029), ('ccrl', 0.029), ('jitters', 0.029), ('xlyl', 0.029), ('xryr', 0.029), ('yrzr', 0.029), ('yuzhen', 0.029), ('assemble', 0.028), ('watch', 0.028), ('focal', 0.028), ('rendering', 0.027), ('feature', 0.027), ('smoothing', 0.027), ('shake', 0.027), ('viewing', 0.027), ('derivatives', 0.027), ('equation', 0.026), ('yri', 0.025), ('frame', 0.025), ('yli', 0.023), ('correspondence', 0.023), ('examined', 0.023), ('view', 0.023), ('ul', 0.023), ('errors', 0.023), ('share', 0.022), ('stabilizing', 0.022), ('xil', 0.022), ('vl', 0.022), ('jitter', 0.021), ('zr', 0.021), ('damage', 0.021), ('rig', 0.021), ('fl', 0.021), ('cause', 0.021), ('track', 0.02), ('matrices', 0.02), ('fps', 0.02), ('guided', 0.02), ('excessive', 0.02), ('zl', 0.02), ('scene', 0.019), ('separately', 0.019), ('go', 0.018), ('long', 0.018), ('constraint', 0.018), ('stabilize', 0.018), ('depth', 0.018), ('composing', 0.018), ('adobe', 0.017), ('worked', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.000001 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video

Author: Feng Liu, Yuzhen Niu, Hailin Jin

Abstract: Shaky stereoscopic video is not only unpleasant to watch but may also cause 3D fatigue. Stabilizing the left and right view of a stereoscopic video separately using a monocular stabilization method tends to both introduce undesirable vertical disparities and damage horizontal disparities, which may destroy the stereoscopic viewing experience. In this paper, we present a joint subspace stabilization method for stereoscopic video. We prove that the low-rank subspace constraint for monocular video [10] also holds for stereoscopic video. Particularly, the feature trajectories from the left and right video share the same subspace. Based on this proof, we develop a stereo subspace stabilization method that jointly computes a common subspace from the left and right video and uses it to stabilize the two videos simultaneously. Our method meets the stereoscopic constraints without 3D reconstruction or explicit left-right correspondence. We test our method on a variety of stereoscopic videos with different scene content and camera motion. The experiments show that our method achieves high-quality stabilization for stereoscopic video in a robust and efficient way.

2 0.21168929 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies

Author: Karteek Alahari, Guillaume Seguin, Josef Sivic, Ivan Laptev

Abstract: We seek to obtain a pixel-wise segmentation and pose estimation of multiple people in a stereoscopic video. This involves challenges such as dealing with unconstrained stereoscopic video, non-stationary cameras, and complex indoor and outdoor dynamic scenes. The contributions of our work are two-fold: First, we develop a segmentation model incorporating person detection, pose estimation, as well as colour, motion, and disparity cues. Our new model explicitly represents depth ordering and occlusion. Second, we introduce a stereoscopic dataset with frames extracted from feature-length movies “StreetDance 3D ” and “Pina ”. The dataset contains 2727 realistic stereo pairs and includes annotation of human poses, person bounding boxes, and pixel-wise segmentations for hundreds of people. The dataset is composed of indoor and outdoor scenes depicting multiple people with frequent occlusions. We demonstrate results on our new challenging dataset, as well as on the H2view dataset from (Sheasby et al. ACCV 2012).

3 0.1849965 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

Author: Ali Elqursh, Ahmed Elgammal

Abstract: The vast majority of work on motion segmentation adopts the affine camera model due to its simplicity. Under the affine model, the motion segmentation problem becomes that of subspace separation. Due to this assumption, such methods are mainly offline and exhibit poor performance when the assumption is not satisfied. This is made evident in state-of-the-art methods that relax this assumption by using piecewise affine spaces and spectral clustering techniques to achieve better results. In this paper, we formulate the problem of motion segmentation as that of manifold separation. We then show how label propagation can be used in an online framework to achieve manifold separation. The performance of our framework is evaluated on a benchmark dataset and achieves competitive performance while being online.

4 0.1648258 361 iccv-2013-Robust Trajectory Clustering for Motion Segmentation

Author: Feng Shi, Zhong Zhou, Jiangjian Xiao, Wei Wu

Abstract: Due to occlusions and objects ’ non-rigid deformation in the scene, the obtained motion trajectories from common trackers may contain a number of missing or mis-associated entries. To cluster such corrupted point based trajectories into multiple motions is still a hard problem. In this paper, we present an approach that exploits temporal and spatial characteristics from tracked points to facilitate segmentation of incomplete and corrupted trajectories, thereby obtain highly robust results against severe data missing and noises. Our method first uses the Discrete Cosine Transform (DCT) bases as a temporal smoothness constraint on trajectory projection to ensure the validity of resulting components to repair pathological trajectories. Then, based on an observation that the trajectories of foreground and background in a scene may have different spatial distributions, we propose a two-stage clustering strategy that first performs foreground-background separation then segments remaining foreground trajectories. We show that, with this new clustering strategy, sequences with complex motions can be accurately segmented by even using a simple trans- lational model. Finally, a series of experiments on Hopkins 155 dataset andBerkeley motion segmentation dataset show the advantage of our method over other state-of-the-art motion segmentation algorithms in terms of both effectiveness and robustness.

5 0.15337862 39 iccv-2013-Action Recognition with Improved Trajectories

Author: Heng Wang, Cordelia Schmid

Abstract: Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results onfour challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.

6 0.14310683 68 iccv-2013-Camera Alignment Using Trajectory Intersections in Unsynchronized Videos

7 0.13915941 439 iccv-2013-Video Co-segmentation for Meaningful Action Extraction

8 0.13557179 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

9 0.13545506 232 iccv-2013-Latent Space Sparse Subspace Clustering

10 0.12611754 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

11 0.11195546 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera

12 0.098052748 304 iccv-2013-PM-Huber: PatchMatch with Huber Regularization for Stereo Matching

13 0.097668134 363 iccv-2013-Rolling Shutter Stereo

14 0.094418727 182 iccv-2013-GOSUS: Grassmannian Online Subspace Updates with Structured-Sparsity

15 0.088872932 263 iccv-2013-Measuring Flow Complexity in Videos

16 0.082414873 32 iccv-2013-A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration

17 0.081224322 216 iccv-2013-Inferring "Dark Matter" and "Dark Energy" from Videos

18 0.080737077 58 iccv-2013-Bayesian 3D Tracking from Monocular Video

19 0.078701399 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation

20 0.074709594 122 iccv-2013-Distributed Low-Rank Subspace Segmentation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.147), (1, -0.087), (2, -0.015), (3, 0.129), (4, -0.079), (5, 0.103), (6, 0.039), (7, 0.043), (8, 0.149), (9, 0.107), (10, 0.075), (11, 0.01), (12, -0.036), (13, -0.019), (14, -0.068), (15, -0.048), (16, -0.02), (17, 0.023), (18, 0.018), (19, 0.069), (20, -0.097), (21, -0.018), (22, -0.048), (23, 0.144), (24, 0.028), (25, 0.035), (26, -0.061), (27, -0.079), (28, 0.034), (29, 0.071), (30, -0.034), (31, -0.036), (32, -0.008), (33, -0.058), (34, 0.091), (35, -0.035), (36, -0.061), (37, -0.038), (38, 0.014), (39, -0.035), (40, -0.008), (41, 0.019), (42, -0.043), (43, -0.0), (44, 0.052), (45, 0.017), (46, -0.049), (47, -0.059), (48, 0.002), (49, 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93193656 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video

Author: Feng Liu, Yuzhen Niu, Hailin Jin

Abstract: Shaky stereoscopic video is not only unpleasant to watch but may also cause 3D fatigue. Stabilizing the left and right view of a stereoscopic video separately using a monocular stabilization method tends to both introduce undesirable vertical disparities and damage horizontal disparities, which may destroy the stereoscopic viewing experience. In this paper, we present a joint subspace stabilization method for stereoscopic video. We prove that the low-rank subspace constraint for monocular video [10] also holds for stereoscopic video. Particularly, the feature trajectories from the left and right video share the same subspace. Based on this proof, we develop a stereo subspace stabilization method that jointly computes a common subspace from the left and right video and uses it to stabilize the two videos simultaneously. Our method meets the stereoscopic constraints without 3D reconstruction or explicit left-right correspondence. We test our method on a variety of stereoscopic videos with different scene content and camera motion. The experiments show that our method achieves high-quality stabilization for stereoscopic video in a robust and efficient way.

2 0.71032459 361 iccv-2013-Robust Trajectory Clustering for Motion Segmentation

Author: Feng Shi, Zhong Zhou, Jiangjian Xiao, Wei Wu

Abstract: Due to occlusions and objects ’ non-rigid deformation in the scene, the obtained motion trajectories from common trackers may contain a number of missing or mis-associated entries. To cluster such corrupted point based trajectories into multiple motions is still a hard problem. In this paper, we present an approach that exploits temporal and spatial characteristics from tracked points to facilitate segmentation of incomplete and corrupted trajectories, thereby obtain highly robust results against severe data missing and noises. Our method first uses the Discrete Cosine Transform (DCT) bases as a temporal smoothness constraint on trajectory projection to ensure the validity of resulting components to repair pathological trajectories. Then, based on an observation that the trajectories of foreground and background in a scene may have different spatial distributions, we propose a two-stage clustering strategy that first performs foreground-background separation then segments remaining foreground trajectories. We show that, with this new clustering strategy, sequences with complex motions can be accurately segmented by even using a simple trans- lational model. Finally, a series of experiments on Hopkins 155 dataset andBerkeley motion segmentation dataset show the advantage of our method over other state-of-the-art motion segmentation algorithms in terms of both effectiveness and robustness.

3 0.69477898 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

Author: Ali Elqursh, Ahmed Elgammal

Abstract: The vast majority of work on motion segmentation adopts the affine camera model due to its simplicity. Under the affine model, the motion segmentation problem becomes that of subspace separation. Due to this assumption, such methods are mainly offline and exhibit poor performance when the assumption is not satisfied. This is made evident in state-of-the-art methods that relax this assumption by using piecewise affine spaces and spectral clustering techniques to achieve better results. In this paper, we formulate the problem of motion segmentation as that of manifold separation. We then show how label propagation can be used in an online framework to achieve manifold separation. The performance of our framework is evaluated on a benchmark dataset and achieves competitive performance while being online.

4 0.65660763 68 iccv-2013-Camera Alignment Using Trajectory Intersections in Unsynchronized Videos

Author: Thomas Kuo, Santhoshkumar Sunderrajan, B.S. Manjunath

Abstract: This paper addresses the novel and challenging problem of aligning camera views that are unsynchronized by low and/or variable frame rates using object trajectories. Unlike existing trajectory-based alignment methods, our method does not require frame-to-frame synchronization. Instead, we propose using the intersections of corresponding object trajectories to match views. To find these intersections, we introduce a novel trajectory matching algorithm based on matching Spatio-Temporal Context Graphs (STCGs). These graphs represent the distances between trajectories in time and space within a view, and are matched to an STCG from another view to find the corresponding trajectories. To the best of our knowledge, this is one of the first attempts to align views that are unsynchronized with variable frame rates. The results on simulated and real-world datasets show trajectory intersections area viablefeatureforcamera alignment, and that the trajectory matching method performs well in real-world scenarios.

5 0.64261043 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

Author: Zhuwen Li, Jiaming Guo, Loong-Fah Cheong, Steven Zhiying Zhou

Abstract: This paper addresses real-world challenges in the motion segmentation problem, including perspective effects, missing data, and unknown number of motions. It first formulates the 3-D motion segmentation from two perspective views as a subspace clustering problem, utilizing the epipolar constraint of an image pair. It then combines the point correspondence information across multiple image frames via a collaborative clustering step, in which tight integration is achieved via a mixed norm optimization scheme. For model selection, wepropose an over-segment and merge approach, where the merging step is based on the property of the ?1-norm ofthe mutual sparse representation oftwo oversegmented groups. The resulting algorithm can deal with incomplete trajectories and perspective effects substantially better than state-of-the-art two-frame and multi-frame methods. Experiments on a 62-clip dataset show the significant superiority of the proposed idea in both segmentation accuracy and model selection.

6 0.56206936 28 iccv-2013-A Rotational Stereo Model Based on XSlit Imaging

7 0.55888015 264 iccv-2013-Minimal Basis Facility Location for Subspace Segmentation

8 0.55359495 263 iccv-2013-Measuring Flow Complexity in Videos

9 0.53204787 39 iccv-2013-Action Recognition with Improved Trajectories

10 0.53111571 216 iccv-2013-Inferring "Dark Matter" and "Dark Energy" from Videos

11 0.53064072 439 iccv-2013-Video Co-segmentation for Meaningful Action Extraction

12 0.52810532 304 iccv-2013-PM-Huber: PatchMatch with Huber Regularization for Stereo Matching

13 0.48090053 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies

14 0.47069469 363 iccv-2013-Rolling Shutter Stereo

15 0.46024826 232 iccv-2013-Latent Space Sparse Subspace Clustering

16 0.45503914 402 iccv-2013-Street View Motion-from-Structure-from-Motion

17 0.43872949 397 iccv-2013-Space-Time Tradeoffs in Photo Sequencing

18 0.43447384 301 iccv-2013-Optimal Orthogonal Basis and Image Assimilation: Motion Modeling

19 0.43346775 32 iccv-2013-A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration

20 0.43316215 252 iccv-2013-Line Assisted Light Field Triangulation and Stereo Matching


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.041), (7, 0.021), (12, 0.015), (26, 0.048), (27, 0.013), (31, 0.051), (35, 0.012), (42, 0.098), (55, 0.228), (64, 0.05), (73, 0.061), (89, 0.202), (97, 0.012), (98, 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86231107 224 iccv-2013-Joint Optimization for Consistent Multiple Graph Matching

Author: Junchi Yan, Yu Tian, Hongyuan Zha, Xiaokang Yang, Ya Zhang, Stephen M. Chu

Abstract: The problem of graph matching in general is NP-hard and approaches have been proposed for its suboptimal solution, most focusing on finding the one-to-one node mapping between two graphs. A more general and challenging problem arises when one aims to find consistent mappings across a number of graphs more than two. Conventional graph pair matching methods often result in mapping inconsistency since the mapping between two graphs can either be determined by pair mapping or by an additional anchor graph. To address this issue, a novel formulation is derived which is maximized via alternating optimization. Our method enjoys several advantages: 1) the mappings are jointly optimized rather than sequentially performed by applying pair matching, allowing the global affinity information across graphs can be propagated and explored; 2) the number of concerned variables to optimize is in linear with the number of graphs, being superior to local pair matching resulting in O(n2) variables; 3) the mapping consistency constraints are analytically satisfied during optimization; and 4) off-the-shelf graph pair matching solvers can be reused under the proposed framework in an ‘out-of-thebox’ fashion. Competitive results on both the synthesized data and the real data are reported, by varying the level of deformation, outliers and edge densities. ∗Corresponding author. The work is supported by NSF IIS1116886, NSF IIS-1049694, NSFC 61129001/F010403 and the 111 Project (B07022). Yu Tian Shanghai Jiao Tong University Shanghai, China, 200240 yut ian @ s j tu . edu .cn Xiaokang Yang Shanghai Jiao Tong University Shanghai, China, 200240 xkyang@ s j tu .edu . cn Stephen M. Chu IBM T.J. Waston Research Center Yorktown Heights, NY USA, 10598 s chu @u s . ibm . com

2 0.83645505 32 iccv-2013-A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration

Author: Maxime Meilland, Tom Drummond, Andrew I. Comport

Abstract: Motion blur and rolling shutter deformations both inhibit visual motion registration, whether it be due to a moving sensor or a moving target. Whilst both deformations exist simultaneously, no models have been proposed to handle them together. Furthermore, neither deformation has been consideredpreviously in the context of monocularfullimage 6 degrees of freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines. Blur is a function of the pixel exposure time and the motion vector. In this paper a complete dense 3D registration model will be derived to accountfor both motion blur and rolling shutter deformations simultaneously. Various approaches will be compared with respect to ground truth and live real-time performance will be demonstratedfor complex scenarios where both blur and shutter deformations are dominant.

3 0.82834095 183 iccv-2013-Geometric Registration Based on Distortion Estimation

Author: Wei Zeng, Mayank Goswami, Feng Luo, Xianfeng Gu

Abstract: Surface registration plays a fundamental role in many applications in computer vision and aims at finding a oneto-one correspondence between surfaces. Conformal mapping based surface registration methods conformally map 2D/3D surfaces onto 2D canonical domains and perform the matching on the 2D plane. This registration framework reduces dimensionality, and the result is intrinsic to Riemannian metric and invariant under isometric deformation. However, conformal mapping will be affected by inconsistent boundaries and non-isometric deformations of surfaces. In this work, we quantify the effects of boundary variation and non-isometric deformation to conformal mappings, and give the theoretical upper bounds for the distortions of conformal mappings under these two factors. Besides giving the thorough theoretical proofs of the theorems, we verified them by concrete experiments using 3D human facial scans with dynamic expressions and varying boundaries. Furthermore, we used the distortion estimates for reducing search range in feature matching of surface registration applications. The experimental results are consistent with the theoreticalpredictions and also demonstrate the performance improvements in feature tracking.

4 0.82223302 363 iccv-2013-Rolling Shutter Stereo

Author: Olivier Saurer, Kevin Köser, Jean-Yves Bouguet, Marc Pollefeys

Abstract: A huge fraction of cameras used nowadays is based on CMOS sensors with a rolling shutter that exposes the image line by line. For dynamic scenes/cameras this introduces undesired effects like stretch, shear and wobble. It has been shown earlier that rotational shake induced rolling shutter effects in hand-held cell phone capture can be compensated based on an estimate of the camera rotation. In contrast, we analyse the case of significant camera motion, e.g. where a bypassing streetlevel capture vehicle uses a rolling shutter camera in a 3D reconstruction framework. The introduced error is depth dependent and cannot be compensated based on camera motion/rotation alone, invalidating also rectification for stereo camera systems. On top, significant lens distortion as often present in wide angle cameras intertwines with rolling shutter effects as it changes the time at which a certain 3D point is seen. We show that naive 3D reconstructions (assuming global shutter) will deliver biased geometry already for very mild assumptions on vehicle speed and resolution. We then develop rolling shutter dense multiview stereo algorithms that solve for time of exposure and depth at the same time, even in the presence of lens distortion and perform an evaluation on ground truth laser scan models as well as on real street-level data.

same-paper 5 0.82196164 226 iccv-2013-Joint Subspace Stabilization for Stereoscopic Video

Author: Feng Liu, Yuzhen Niu, Hailin Jin

Abstract: Shaky stereoscopic video is not only unpleasant to watch but may also cause 3D fatigue. Stabilizing the left and right view of a stereoscopic video separately using a monocular stabilization method tends to both introduce undesirable vertical disparities and damage horizontal disparities, which may destroy the stereoscopic viewing experience. In this paper, we present a joint subspace stabilization method for stereoscopic video. We prove that the low-rank subspace constraint for monocular video [10] also holds for stereoscopic video. Particularly, the feature trajectories from the left and right video share the same subspace. Based on this proof, we develop a stereo subspace stabilization method that jointly computes a common subspace from the left and right video and uses it to stabilize the two videos simultaneously. Our method meets the stereoscopic constraints without 3D reconstruction or explicit left-right correspondence. We test our method on a variety of stereoscopic videos with different scene content and camera motion. The experiments show that our method achieves high-quality stabilization for stereoscopic video in a robust and efficient way.

6 0.81434929 23 iccv-2013-A New Image Quality Metric for Image Auto-denoising

7 0.80362922 406 iccv-2013-Style-Aware Mid-level Representation for Discovering Visual Connections in Space and Time

8 0.79420519 312 iccv-2013-Perceptual Fidelity Aware Mean Squared Error

9 0.77818209 144 iccv-2013-Estimating the 3D Layout of Indoor Scenes and Its Clutter from Depth Sensors

10 0.72729975 60 iccv-2013-Bayesian Robust Matrix Factorization for Image and Video Processing

11 0.72480255 300 iccv-2013-Optical Flow via Locally Adaptive Fusion of Complementary Data Costs

12 0.72467291 196 iccv-2013-Hierarchical Data-Driven Descent for Efficient Optimal Deformation Estimation

13 0.72420126 57 iccv-2013-BOLD Features to Detect Texture-less Objects

14 0.72327763 358 iccv-2013-Robust Non-parametric Data Fitting for Correspondence Modeling

15 0.72296959 35 iccv-2013-Accurate Blur Models vs. Image Priors in Single Image Super-resolution

16 0.72233605 65 iccv-2013-Breaking the Chain: Liberation from the Temporal Markov Assumption for Tracking Human Poses

17 0.72226703 314 iccv-2013-Perspective Motion Segmentation via Collaborative Clustering

18 0.72210217 349 iccv-2013-Regionlets for Generic Object Detection

19 0.72186899 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera

20 0.72185767 82 iccv-2013-Compensating for Motion during Direct-Global Separation