cvpr cvpr2013 cvpr2013-285 knowledge-graph by maker-knowledge-mining

285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking


Source: pdf

Author: Junseok Kwon, Kyoung Mu Lee

Abstract: We propose a novel tracking algorithm that robustly tracks the target by finding the state which minimizes uncertainty of the likelihood at current state. The uncertainty of the likelihood is estimated by obtaining the gap between the lower and upper bounds of the likelihood. By minimizing the gap between the two bounds, our method finds the confident and reliable state of the target. In the paper, the state that gives the Minimum Uncertainty Gap (MUG) between likelihood bounds is shown to be more reliable than the state which gives the maximum likelihood only, especially when there are severe illumination changes, occlusions, and pose variations. A rigorous derivation of the lower and upper bounds of the likelihood for the visual tracking problem is provided to address this issue. Additionally, an efficient inference algorithm using Interacting Markov Chain Monte Carlo is presented to find the best state that maximizes the average of the lower and upper bounds of the likelihood and minimizes the gap between two bounds simultaneously. Experimental results demonstrate that our method successfully tracks the target in realistic videos and outperforms conventional tracking methods.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 Abstract We propose a novel tracking algorithm that robustly tracks the target by finding the state which minimizes uncertainty of the likelihood at current state. [sent-2, score-1.243]

2 The uncertainty of the likelihood is estimated by obtaining the gap between the lower and upper bounds of the likelihood. [sent-3, score-1.07]

3 By minimizing the gap between the two bounds, our method finds the confident and reliable state of the target. [sent-4, score-0.472]

4 In the paper, the state that gives the Minimum Uncertainty Gap (MUG) between likelihood bounds is shown to be more reliable than the state which gives the maximum likelihood only, especially when there are severe illumination changes, occlusions, and pose variations. [sent-5, score-1.293]

5 A rigorous derivation of the lower and upper bounds of the likelihood for the visual tracking problem is provided to address this issue. [sent-6, score-1.008]

6 Additionally, an efficient inference algorithm using Interacting Markov Chain Monte Carlo is presented to find the best state that maximizes the average of the lower and upper bounds of the likelihood and minimizes the gap between two bounds simultaneously. [sent-7, score-1.701]

7 Experimental results demonstrate that our method successfully tracks the target in realistic videos and outperforms conventional tracking methods. [sent-8, score-0.606]

8 Introduction The objective of the tracking problem is to track the target accurately in real environments [3, 5, 6, 8, 9, 11, 13, 15, 21, 31, 35, 36]. [sent-10, score-0.536]

9 For robust tracking, most conventional tracking methods formulate the tracking problem within the Bayesian framework [10, 20, 18, 26, 27, 28, 33, 34]. [sent-11, score-0.525]

10 In the Bayesian tracking approach, the goal of the tracking problem is to find the best state that maximizes the posterior probability p(Xt|Y1:t). [sent-12, score-0.575]

11 To obtain the MAP state, the method searches for the state that maximizes the likelihood p(Yt|Xt) in the vicinity of the previous state, which serves as a prior. [sent-14, score-0.792]
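
For reference, the posterior in this formulation follows the standard Bayesian filtering recursion (a textbook identity, not specific to this paper):

    p(X_t \mid Y_{1:t}) \propto p(Y_t \mid X_t) \int p(X_t \mid X_{t-1})\, p(X_{t-1} \mid Y_{1:t-1})\, dX_{t-1},

so MAP tracking amounts to maximizing the likelihood p(Yt|Xt) weighted by the motion prior around the previous state.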

12 In this case, the likelihood is typically calculated by measuring the similarity between the observation Yt at the state Xt and the target model Mt. [sent-15, score-0.434]

13 (c) How does our method efficiently employ an infinite number of target models? [sent-137, score-0.282]

14 (a) The likelihood bounds (uncertainty) inevitably arise in real tracking situations because different target models are employed via different updating strategies during the tracking process. [sent-140, score-1.327]

15 (b) A large gap between the upper and lower bounds indicates that the corresponding state gives very different answers (likelihoods) depending on the target models used (red and blue), although the average likelihood obtained by using the set of all target models (green) is high. [sent-141, score-1.692]

16 That means the likelihood estimation over that state is uncertain and unreliable. [sent-142, score-0.459]

17 So, our method tries to find the state that has the minimum gap (uncertainty), which gives consistent answers (likelihoods) regardless of the target models. [sent-143, score-0.721]

18 By maximizing the average likelihood bound at the same time, our method obtains the state that confidently maximizes the likelihood. [sent-144, score-0.417]

19 (c) The proposed method compares only two target models with observations while utilizing an infinite number of target models, which generate the lower and upper bounds of the likelihood. [sent-145, score-1.032]

20 In this case, the MAP estimation assumes that the best state produces the highest likelihood score near the previous state. [sent-148, score-0.434]

21 However, in the real-world scenario, this assumption is not valid unless the target model Mt is always correct. [sent-149, score-0.276]

22 In practice, the target model is easily corrupted and distorted during the online update. [sent-150, score-0.295]

23 To deal with severe appearance changes, many tracking algorithms evolve the target model through online updates. [sent-151, score-0.6]

24 Our method successfully tracks the target using the MUG estimation, whereas the conventional methods fail to track it using the MAP estimation. [sent-177, score-0.398]

25 The MUG estimation finds the true state A of the target because the gap between the likelihood bounds in State A is smaller than that in State B. [sent-178, score-1.294]
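
A toy numerical illustration of this State A versus State B comparison (the bound values and the score form below are invented for exposition; the paper's exact criterion is its Eq. (1)):

    # Hypothetical likelihood bounds; a higher score means a large
    # average bound combined with a small uncertainty gap.
    def mug_score(p_l, p_u):
        return (p_u + p_l) / (p_u - p_l + 1e-12)

    state_a = mug_score(0.60, 0.70)  # average 0.65,  gap 0.10 -> score ~ 13.0
    state_b = mug_score(0.30, 0.95)  # average 0.625, gap 0.65 -> score ~ 1.9

State A wins under MUG even though State B has the higher upper bound.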

26 Owing to tracking error, the target model includes more background and becomes erroneous as time goes on. [sent-180, score-0.501]

27 This drift phenomenon frequently occurs, because of the noisy target model, even when the methods find the optimal MAP state. [sent-182, score-0.528]

28 Given this fundamental, inherent problem, the conventional MAP-based tracking approach needs to be reconsidered. [sent-183, score-0.282]

29 Thus, in this paper, we redefine the goal of the tracking problem as finding the best state that maximizes the average bound of the likelihood and, at the same time, minimizes the gap between the bounds of the likelihood. [sent-184, score-1.288]

30 Note that in the general tracking problem, the upper and lower bounds, or uncertainties, of the likelihood arise naturally, since many different likelihoods are produced by different target models, which are the reference appearances of the target. [sent-186, score-1.135]

31 The different target models are usually constructed due to the different updating strategies during the tracking process [23]. [sent-187, score-0.501]

32 Especially when there exist severe occlusions, illumination changes, and so on, the likelihood uncertainty becomes larger, as empirically demonstrated in Fig. [sent-188, score-0.42]

33 This is because distractors such as occlusions and illumination changes usually make the target models differ greatly from each other. [sent-190, score-0.392]

34 Since the different likelihoods can be generated by different target models, obtaining the likelihood bounds is the same as considering all possible target models that could be constructed. [sent-191, score-1.158]

35 Using the likelihood bounds, the proposed method can find a good target state because it implicitly covers all possible appearance changes of the target with all possible target models. [sent-192, score-1.265]

36 As shown in Fig. 1(b), the large gap between the upper and lower bounds indicates that the corresponding state can have either a very good likelihood or a very bad likelihood depending on the employed target model. [sent-194, score-1.634]

37 In this case, the likelihood estimation over the state is easily affected by the noisy target models and the estimated likelihood is uncertain and not reliable. [sent-195, score-0.942]

38 Hence, by minimizing the gap between the two likelihood bounds, the proposed method can find the confident state of the target. [sent-196, score-0.654]

39 MUG is also affected by the aforementioned distractors (outliers) in the target model. [sent-197, score-0.308]

40 To measure the confidence of the likelihood, our method estimates the lower and upper bounds of the likelihood, minimizes the gap between the bounds, and accurately tracks the target, as shown in Fig. [sent-202, score-0.888]

41 Among tracking methods using multiple target models, the VTS tracker [19] tracked the target successfully with the visual tracker sampler framework. [sent-204, score-0.946]

42 The MIL tracker [2] solved the ambiguity of the target appearance using multiple instances of the target model and robustly tracked the target with the online multiple instance learning algorithm. [sent-205, score-0.965]

43 Compared with these works, which utilize a relatively small number of target models, our method implicitly employs all possible target models via the likelihood bounds. [sent-206, score-0.765]

44 Among tracking methods using a likelihood bound, the L1BPR tracker [24] is an efficient L1 tracker based on the Bounded Particle Re-sampling (BPR) technique, which considers the upper bound of the likelihood. [sent-207, score-0.808]

45 Our method utilizes the likelihood bounds to measure uncertainty of the likelihood and enhances the accuracy of visual tracking. [sent-209, score-0.959]

46 Among sampling-based tracking methods, the particle filter [12] handles the non-Gaussianity of the target distribution in tracking problems. [sent-210, score-0.821]

47 However, our method uses the samples to obtain the state that minimizes the uncertainty of the target distribution, whereas the samples in the other methods are used to obtain the maximum of the target distribution. [sent-214, score-0.956]

48 The second contribution is a rigorous derivation of the lower and upper bounds of the likelihood in the visual tracking problem. [sent-217, score-1.008]

49 Although the bounds of the likelihood are obtained based on [14], applying those bounds to the visual tracking problem directly is not trivial, since proper and careful designs of the parameter γ and the distribution q are needed for the visual tracking problem. [sent-218, score-1.476]

50 In this work, γ and q are designed as the hyperparameter of the likelihood and the prior distribution of a target model, respectively. [sent-219, score-0.563]

51 Our method constructs two chains and infers the best state on the chains using the IMCMC method in [19]. [sent-221, score-0.407]

52 In the first chain, the proposed method finds the state that maximizes the average bound (mean of the lower and upper bounds) of the likelihood. [sent-222, score-0.602]

53 In the second chain, the method searches for the state that minimizes the gap between two likelihood bounds. [sent-223, score-0.798]

54 These chains communicate with each other to obtain the best state that maximizes the average bound and minimizes the gap between bounds at the same time. [sent-224, score-1.162]

55 In (1), our method finds the state that maximizes the average bound, [pu(Yt|Xt) + pl(Yt|Xt)], and minimizes the gap between the bounds, [pu(Yt|Xt) − pl(Yt|Xt)], at the same time. [sent-227, score-0.802]
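
A plausible written form of (1), consistent with sentence 65 below, which says the gap is the numerator; the paper's exact normalization may differ:

    \hat{X}_t = \arg\min_{X_t} \frac{p_u(Y_t \mid X_t) - p_l(Y_t \mid X_t)}{p_u(Y_t \mid X_t) + p_l(Y_t \mid X_t)}

Minimizing this ratio shrinks the gap while rewarding a large average bound.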

56 To obtain the MUG state, we need to estimate the lower and upper bounds of the likelihood. [sent-229, score-0.516]

57 As aforementioned, the main cause of the tracking failures is the noisy target models. [sent-239, score-0.501]

58 Therefore, our method integrates out the target model θ in (2) and estimates the log marginal likelihood ln p(Yt|Xt) = ln ∫Θ p(Yt, θ|Xt) dθ. [sent-240, score-0.282]

59 To approximate the integral numerically, we obtain the mathematical lower bound (Jensen's inequality) and upper bound (Gibbs' inequality) of the marginal likelihood, based on [14]. [sent-243, score-0.765]
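
As a sketch of how the two bounds arise (the standard Jensen/Gibbs argument that [14] builds on, with q(θ|γ, Xt) playing the role of a variational distribution):

    Lower bound (Jensen), matching the role of (3):
        \ln p(Y_t \mid X_t)
          = \ln \int_\Theta q(\theta \mid \gamma, X_t)\, \frac{p(Y_t, \theta \mid X_t)}{q(\theta \mid \gamma, X_t)}\, d\theta
          \ge \int_\Theta q(\theta \mid \gamma, X_t)\, \ln \frac{p(Y_t, \theta \mid X_t)}{q(\theta \mid \gamma, X_t)}\, d\theta
          = \ln p_l(Y_t \mid X_t, \gamma)

    Upper bound (Gibbs), matching the role of (4):
        \ln p(Y_t \mid X_t)
          \le \int_\Theta p(\theta \mid Y_t, X_t)\, \ln \frac{p(Y_t, \theta \mid X_t)}{q(\theta \mid \gamma, X_t)}\, d\theta
          = \ln p_u(Y_t \mid X_t, \gamma)

The total gap equals KL(q ‖ p(θ|Yt, Xt)) + KL(p(θ|Yt, Xt) ‖ q), so a small gap means q is close to the true posterior over target models.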

60 ln pu(Yt|Xt, γ) = ∫Θ p(θ|Yt, Xt) ln [p(Yt, θ|Xt) / q(θ|γ, Xt)] dθ, (4) where q(θ|γ, Xt) is the prior distribution of the target model θ and γ is the hyperparameter of the distribution. [sent-246, score-0.338]

61 Because θ is marginalized out in (3) and (4), the lower and upper bounds of the likelihood are functions of Xt and γ, which are a state and a parameter, respectively. [sent-247, score-0.95]

62 Then, the goal of our method is to find both the best state and the best parameter, which reduce the gap between the likelihood bounds. [sent-248, score-0.635]

63 In our method, the parameter is not set empirically but is obtained analytically, to maximize the lower bound in (3) and to minimize the upper bound in (4). [sent-250, score-0.39]

64 In addition, it prevents the best state with the minimum uncertainty gap from having a low likelihood score. [sent-256, score-0.791]

65 Our method simultaneously searches for states that minimize the uncertainty of the likelihood estimation by decreasing the numerator in (1). [sent-257, score-0.445]

66 Then, the method can avoid outliers which have a large uncertainty gap, even though they have high likelihood scores. [sent-258, score-0.371]

67 Learning γu for the Upper Bound. We learn the best parameter γu that minimizes the upper bound in (4): γu = argmin_γ ln pu(Yt|Xt, γ). [sent-263, score-0.339]

68 The minimum of the upper bound of the likelihood at the state Xt in (4) is estimated based on [14] as a Monte Carlo average over Z samples: ln pu(Yt|Xt, γu) ≈ (1/Z) Σ_{i=1}^{Z} (…). [sent-312, score-0.644]

69 Learning γl for the Lower Bound. We learn the best parameter γl that maximizes the lower bound in (3): γl = argmax_γ ln pl(Yt|Xt, γ). [sent-316, score-0.388]

70 To find the parameter γl = (μl, σl) that satisfies (10), the quasi-optimized lower bound is estimated by Stochastic Approximation Monte Carlo (SAMC) [22], which defines a recursive approximation to the solution of (d/dγ) ln pl(Yt|Xt, γ) = 0 in (11). [sent-323, score-0.321]
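
A minimal sketch of the stochastic-approximation recursion implied here (the step-size schedule a_k and the gradient estimator are assumptions; the paper's exact SAMC weighting is not reproduced):

    \gamma_l^{(k+1)} = \gamma_l^{(k)} + a_k\, \hat{g}(\gamma_l^{(k)}),
    \qquad \hat{g}(\gamma) \approx \frac{d}{d\gamma} \ln p_l(Y_t \mid X_t, \gamma),
    \qquad \sum_k a_k = \infty,\ \sum_k a_k^2 < \infty,

where \hat{g} is a Monte Carlo estimate of the gradient; under these Robbins–Monro conditions the iterates converge to a root of (d/dγ) ln pl = 0.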

71 Then, the final estimate of the lower bound at the state Xt in (3) is computed based on [14] as a Monte Carlo average over Z samples: ln pl(Yt|Xt, γl) ≈ (1/Z) Σ_{i=1}^{Z} (…). [sent-334, score-0.363]
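
Both final estimates are sample averages of the form (1/Z) Σ_{i=1}^{Z} (…); a minimal Python sketch, where log_term is a hypothetical per-sample integrand standing in for the elided summand:

    import numpy as np

    def mc_log_bound(theta_samples, log_term):
        # Monte Carlo estimate: ln p(Yt | Xt, gamma) ~= (1/Z) * sum_i log_term(theta_i),
        # averaged over Z samples of the target model theta.
        return float(np.mean([log_term(theta) for theta in theta_samples]))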

72 The first chain frequently accepts the state that maximizes the average likelihood bound. [sent-339, score-0.666]

73 The second chain frequently accepts the state that minimizes the gap between the bounds. [sent-340, score-0.578]

74 At the proposal step, a new state is proposed by the proposal density function. [sent-343, score-0.275]

75 Given the proposed state, each chain decides whether the state is accepted in the acceptance step, using the acceptance ratio ap1 = min(1, …). [sent-345, score-0.357]

76 [Table residue: tracker comparison including IVT, FRAG, MIL, VTS, MTT, and MUG.] The chain finds the state that maximizes the average bound and minimizes the gap between the two bounds simultaneously. [sent-354, score-0.103]

77 The best state in IMCMC is obtained by employing two chains, in which one chain searches only for the state that maximizes the average bound and the other searches only for the state that minimizes the gap between the two bounds. [sent-355, score-1.321]
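
A minimal toy sketch in the spirit of the two-chain scheme (propose, avg_bound, and gap are hypothetical user-supplied callables; the acceptance rules and interaction schedule are simplifying assumptions, not the paper's exact IMCMC):

    import numpy as np

    def imcmc_mug(x0, propose, avg_bound, gap, n_iters=1000, p_interact=0.1, seed=0):
        # Chain 1 prefers states with a high average bound (p_u + p_l) / 2;
        # chain 2 prefers states with a small gap (p_u - p_l).
        rng = np.random.default_rng(seed)
        x1, x2 = x0, x0
        best, best_score = x0, -np.inf
        for _ in range(n_iters):
            # Parallel mode: one independent Metropolis step per chain.
            c = propose(x1, rng)
            if rng.random() < min(1.0, avg_bound(c) / max(avg_bound(x1), 1e-12)):
                x1 = c
            c = propose(x2, rng)
            # Target decreasing in the gap: a smaller-gap candidate is favored.
            if rng.random() < min(1.0, gap(x2) / max(gap(c), 1e-12)):
                x2 = c
            # Interacting mode: chains occasionally adopt each other's state.
            if rng.random() < p_interact:
                if avg_bound(x2) > avg_bound(x1):
                    x1 = x2
                if gap(x1) < gap(x2):
                    x2 = x1
            # Track the best state under the combined MUG criterion.
            score = avg_bound(x1) / max(gap(x1), 1e-12)
            if score > best_score:
                best, best_score = x1, score
        return best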

78 As indicated in Table 2, using two chains shows better tracking performance because the tracking methods using a single chain get trapped in local optima more frequently as the target distribution becomes complex. [sent-357, score-0.977]

79 The target distribution is complex because two different types of likelihood distribution are mixed in a single distribution. [sent-358, score-0.529]

80 Our method divides a complex distribution into two simple ones using IMCMC, where the two distributions describe the average bound and the gap between the two bounds, respectively. [sent-359, score-0.327]

81 Our method was robust to the geometric appearance changes of the non-rigid target in diving, high-jump, and skater; the occlusions in bird1 and woman; and the motion blur in bird2. [sent-364, score-0.359]

82 In this paper, we wanted to demonstrate that our method can produce better tracking results by utilizing a very simple likelihood function and its lower and upper bounds. [sent-365, score-0.626]

83 The tracking performance of our method can be further enhanced if more advanced likelihood functions are employed. [sent-367, score-0.468]

84 Additionally, with the simple likelihood function, our method produced more accurate tracking results than other state-of-the-art methods, while remaining robust to pose variations, occlusions, and illumination changes. [sent-368, score-0.494]

85 In Fig. 5, the tracking performance under severe occlusions and background clutter was tested. [sent-422, score-0.328]

86 When the sequence contained several appearance changes of the target at the same time, our method robustly tracked the target over time, while other tracking methods frequently missed the targets. [sent-423, score-0.917]

87 The tracking results of MIL drifted into the background when the aforementioned changes transformed the target appearance into a different one. [sent-424, score-0.58]

88 Our method overcame this problem and successfully tracked the target by evaluating the target configuration with several likelihoods. [sent-425, score-0.573]

89 In Fig. 6, our method did not miss the target in any frame, although the sequences include severe geometric appearance changes of the target. [sent-427, score-0.356]

90 On the other hand, MIL and VTS frequently failed to track the target when the target was severely deformed. [sent-428, score-0.584]

91 Our method was more efficient than VTS in terms of computational cost, because it evaluated each configuration with only two likelihoods, obtained by estimating the lower and upper bounds of the likelihood. [sent-429, score-0.575]

92 Conclusion In this paper, we propose a novel tracking framework that tracks the target robustly by finding the best state of the target, which minimizes the gap between the lower and upper bounds of the likelihood. [sent-431, score-1.607]

93 Obtaining the likelihood bounds is the same as considering all possible target models during the tracking process. [sent-432, score-1.084]

94 Therefore, our method finds the good state of the target by reflecting all possible appearance changes of the target. [sent-433, score-0.546]

95 The experimental results demonstrate that our method outperforms the conventional tracking methods using the MAP and ML estimation. [sent-434, score-0.282]

96 The method also shows better tracking performance than the state-of-the-art tracking methods when there are illumination changes, occlusions, and deformations. [sent-435, score-0.486]

97 Tracking results with lower and upper bounds of the likelihood obtained by MUG. [sent-668, score-0.741]

98 Yellow and red curves represent lower and upper bounds of the likelihood over time in MUG, respectively. [sent-670, score-0.741]

99 The green curve represents the gap between the bounds over time in MUG. [sent-671, score-0.559]

100 MCMC-based particle filtering for tracking a variable number of interacting targets. [sent-703, score-0.321]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('xt', 0.399), ('bounds', 0.358), ('mug', 0.291), ('target', 0.258), ('tracking', 0.243), ('yt', 0.242), ('likelihood', 0.225), ('state', 0.209), ('gap', 0.201), ('imcmc', 0.158), ('xxtt', 0.138), ('uncertainty', 0.128), ('lnpl', 0.119), ('yytt', 0.119), ('upper', 0.107), ('vts', 0.105), ('bound', 0.103), ('minimizes', 0.103), ('chains', 0.099), ('maximizes', 0.089), ('chain', 0.078), ('kwon', 0.076), ('xxt', 0.073), ('tracker', 0.065), ('searches', 0.06), ('lnpq', 0.059), ('lnpu', 0.059), ('likelihoods', 0.059), ('mil', 0.059), ('monte', 0.058), ('dist', 0.056), ('lower', 0.051), ('carlo', 0.051), ('occlusions', 0.044), ('tracks', 0.043), ('finds', 0.043), ('pu', 0.042), ('interacting', 0.042), ('severe', 0.041), ('bpr', 0.04), ('pll', 0.04), ('ppuu', 0.04), ('samc', 0.04), ('conventional', 0.039), ('qq', 0.038), ('online', 0.037), ('changes', 0.036), ('particle', 0.036), ('track', 0.035), ('acceptance', 0.035), ('robustly', 0.034), ('tracked', 0.034), ('frequently', 0.033), ('proposal', 0.033), ('accepts', 0.032), ('states', 0.032), ('hyper', 0.031), ('seoul', 0.029), ('moment', 0.029), ('drift', 0.028), ('markov', 0.028), ('snu', 0.028), ('distractors', 0.028), ('minimum', 0.028), ('pl', 0.027), ('illumination', 0.026), ('parameter', 0.026), ('confidence', 0.025), ('answers', 0.025), ('uncertain', 0.025), ('uu', 0.024), ('marginal', 0.024), ('tx', 0.024), ('employs', 0.024), ('rigorous', 0.024), ('hsv', 0.023), ('utilizes', 0.023), ('bounded', 0.023), ('successfully', 0.023), ('distribution', 0.023), ('saffari', 0.023), ('pami', 0.022), ('aforementioned', 0.022), ('iz', 0.022), ('purple', 0.022), ('satisfies', 0.022), ('map', 0.022), ('grabner', 0.021), ('appearance', 0.021), ('parallel', 0.021), ('confident', 0.019), ('yellow', 0.019), ('stochastic', 0.018), ('sampling', 0.018), ('outliers', 0.018), ('sumption', 0.018), ('ppll', 0.018), ('lnqp', 0.018), ('apnd', 0.018), ('fragt', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

Author: Junseok Kwon, Kyoung Mu Lee

Abstract: We propose a novel tracking algorithm that robustly tracks the target by finding the state which minimizes uncertainty of the likelihood at current state. The uncertainty of the likelihood is estimated by obtaining the gap between the lower and upper bounds of the likelihood. By minimizing the gap between the two bounds, our method finds the confident and reliable state of the target. In the paper, the state that gives the Minimum Uncertainty Gap (MUG) between likelihood bounds is shown to be more reliable than the state which gives the maximum likelihood only, especially when there are severe illumination changes, occlusions, and pose variations. A rigorous derivation of the lower and upper bounds of the likelihood for the visual tracking problem is provided to address this issue. Additionally, an efficient inference algorithm using Interacting Markov Chain Monte Carlo is presented to find the best state that maximizes the average of the lower and upper bounds of the likelihood and minimizes the gap between two bounds simultaneously. Experimental results demonstrate that our method successfully tracks the target in realistic videos and outperforms conventional tracking methods.

2 0.31168634 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning

Author: Rui Yao, Qinfeng Shi, Chunhua Shen, Yanning Zhang, Anton van_den_Hengel

Abstract: Despite many advances made in the area, deformable targets and partial occlusions continue to represent key problems in visual tracking. Structured learning has shown good results when applied to tracking whole targets, but applying this approach to a part-based target model is complicated by the need to model the relationships between parts, and to avoid lengthy initialisation processes. We thus propose a method which models the unknown parts using latent variables. In doing so we extend the online algorithm pegasos to the structured prediction case (i.e., predicting the location of the bounding boxes) with latent part variables. To better estimate the parts, and to avoid over-fitting caused by the extra model complexity/capacity introduced by theparts, wepropose a two-stage trainingprocess, based on the primal rather than the dual form. We then show that the method outperforms the state-of-the-art (linear and non-linear kernel) trackers.

3 0.21471965 457 cvpr-2013-Visual Tracking via Locality Sensitive Histograms

Author: Shengfeng He, Qingxiong Yang, Rynson W.H. Lau, Jiang Wang, Ming-Hsuan Yang

Abstract: This paper presents a novel locality sensitive histogram algorithm for visual tracking. Unlike the conventional image histogram that counts the frequency of occurrences of each intensity value by adding ones to the corresponding bin, a locality sensitive histogram is computed at each pixel location and a floating-point value is added to the corresponding bin for each occurrence of an intensity value. The floating-point value declines exponentially with respect to the distance to the pixel location where the histogram is computed; thus every pixel is considered but those that are far away can be neglected due to the very small weights assigned. An efficient algorithm is proposed that enables the locality sensitive histograms to be computed in time linear in the image size and the number of bins. A robust tracking framework based on the locality sensitive histograms is proposed, which consists of two main components: a new feature for tracking that is robust to illumination changes and a novel multi-region tracking algorithm that runs in realtime even with hundreds of regions. Extensive experiments demonstrate that the proposed tracking framework outper- , forms the state-of-the-art methods in challenging scenarios, especially when the illumination changes dramatically.

4 0.19901234 314 cvpr-2013-Online Object Tracking: A Benchmark

Author: Yi Wu, Jongwoo Lim, Ming-Hsuan Yang

Abstract: Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.

5 0.18724956 267 cvpr-2013-Least Soft-Threshold Squares Tracking

Author: Dong Wang, Huchuan Lu, Ming-Hsuan Yang

Abstract: In this paper, we propose a generative tracking method based on a novel robust linear regression algorithm. In contrast to existing methods, the proposed Least Soft-thresold Squares (LSS) algorithm models the error term with the Gaussian-Laplacian distribution, which can be solved efficiently. Based on maximum joint likelihood of parameters, we derive a LSS distance to measure the difference between an observation sample and the dictionary. Compared with the distance derived from ordinary least squares methods, the proposed metric is more effective in dealing with outliers. In addition, we present an update scheme to capture the appearance change of the tracked target and ensure that the model is properly updated. Experimental results on several challenging image sequences demonstrate that the proposed tracker achieves more favorable performance than the state-of-the-art methods.

6 0.18394245 386 cvpr-2013-Self-Paced Learning for Long-Term Tracking

7 0.1393895 15 cvpr-2013-A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration

8 0.12292521 414 cvpr-2013-Structure Preserving Object Tracking

9 0.11723139 199 cvpr-2013-Harry Potter's Marauder's Map: Localizing and Tracking Multiple Persons-of-Interest by Nonnegative Discretization

10 0.11715834 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

11 0.11553168 209 cvpr-2013-Hypergraphs for Joint Multi-view Reconstruction and Multi-object Tracking

12 0.11530626 274 cvpr-2013-Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization

13 0.11319489 439 cvpr-2013-Tracking Human Pose by Tracking Symmetric Parts

14 0.10595355 224 cvpr-2013-Information Consensus for Distributed Multi-target Tracking

15 0.10408522 55 cvpr-2013-Background Modeling Based on Bidirectional Analysis

16 0.097440481 419 cvpr-2013-Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

17 0.096899576 331 cvpr-2013-Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis

18 0.096276544 387 cvpr-2013-Semi-supervised Domain Adaptation with Instance Constraints

19 0.088362463 249 cvpr-2013-Learning Compact Binary Codes for Visual Tracking

20 0.086966768 121 cvpr-2013-Detection- and Trajectory-Level Exclusion in Multiple Object Tracking


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.151), (1, 0.008), (2, -0.031), (3, -0.067), (4, 0.019), (5, -0.031), (6, 0.159), (7, -0.135), (8, 0.097), (9, 0.205), (10, -0.069), (11, -0.096), (12, -0.136), (13, 0.098), (14, -0.103), (15, -0.048), (16, 0.0), (17, -0.032), (18, 0.088), (19, -0.016), (20, 0.033), (21, -0.039), (22, -0.023), (23, -0.013), (24, -0.024), (25, 0.028), (26, -0.041), (27, -0.026), (28, 0.011), (29, 0.048), (30, 0.017), (31, 0.004), (32, -0.029), (33, 0.016), (34, -0.114), (35, -0.082), (36, -0.045), (37, 0.028), (38, -0.066), (39, -0.026), (40, 0.046), (41, 0.032), (42, 0.009), (43, 0.019), (44, -0.077), (45, -0.037), (46, 0.043), (47, -0.016), (48, -0.053), (49, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98654461 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

Author: Junseok Kwon, Kyoung Mu Lee

Abstract: We propose a novel tracking algorithm that robustly tracks the target by finding the state which minimizes uncertainty of the likelihood at current state. The uncertainty of the likelihood is estimated by obtaining the gap between the lower and upper bounds of the likelihood. By minimizing the gap between the two bounds, our method finds the confident and reliable state of the target. In the paper, the state that gives the Minimum Uncertainty Gap (MUG) between likelihood bounds is shown to be more reliable than the state which gives the maximum likelihood only, especially when there are severe illumination changes, occlusions, and pose variations. A rigorous derivation of the lower and upper bounds of the likelihood for the visual tracking problem is provided to address this issue. Additionally, an efficient inference algorithm using Interacting Markov Chain Monte Carlo is presented to find the best state that maximizes the average of the lower and upper bounds of the likelihood and minimizes the gap between two bounds simultaneously. Experimental results demonstrate that our method successfully tracks the target in realistic videos and outperforms conventional tracking methods.

2 0.80973256 457 cvpr-2013-Visual Tracking via Locality Sensitive Histograms

Author: Shengfeng He, Qingxiong Yang, Rynson W.H. Lau, Jiang Wang, Ming-Hsuan Yang

Abstract: This paper presents a novel locality sensitive histogram algorithm for visual tracking. Unlike the conventional image histogram that counts the frequency of occurrences of each intensity value by adding ones to the corresponding bin, a locality sensitive histogram is computed at each pixel location and a floating-point value is added to the corresponding bin for each occurrence of an intensity value. The floating-point value declines exponentially with respect to the distance to the pixel location where the histogram is computed; thus every pixel is considered but those that are far away can be neglected due to the very small weights assigned. An efficient algorithm is proposed that enables the locality sensitive histograms to be computed in time linear in the image size and the number of bins. A robust tracking framework based on the locality sensitive histograms is proposed, which consists of two main components: a new feature for tracking that is robust to illumination changes and a novel multi-region tracking algorithm that runs in realtime even with hundreds of regions. Extensive experiments demonstrate that the proposed tracking framework outper- , forms the state-of-the-art methods in challenging scenarios, especially when the illumination changes dramatically.

3 0.80259418 314 cvpr-2013-Online Object Tracking: A Benchmark

Author: Yi Wu, Jongwoo Lim, Ming-Hsuan Yang

Abstract: Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.

4 0.78819674 267 cvpr-2013-Least Soft-Threshold Squares Tracking

Author: Dong Wang, Huchuan Lu, Ming-Hsuan Yang

Abstract: In this paper, we propose a generative tracking method based on a novel robust linear regression algorithm. In contrast to existing methods, the proposed Least Soft-thresold Squares (LSS) algorithm models the error term with the Gaussian-Laplacian distribution, which can be solved efficiently. Based on maximum joint likelihood of parameters, we derive a LSS distance to measure the difference between an observation sample and the dictionary. Compared with the distance derived from ordinary least squares methods, the proposed metric is more effective in dealing with outliers. In addition, we present an update scheme to capture the appearance change of the tracked target and ensure that the model is properly updated. Experimental results on several challenging image sequences demonstrate that the proposed tracker achieves more favorable performance than the state-of-the-art methods.

5 0.74000961 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning

Author: Rui Yao, Qinfeng Shi, Chunhua Shen, Yanning Zhang, Anton van_den_Hengel

Abstract: Despite many advances made in the area, deformable targets and partial occlusions continue to represent key problems in visual tracking. Structured learning has shown good results when applied to tracking whole targets, but applying this approach to a part-based target model is complicated by the need to model the relationships between parts, and to avoid lengthy initialisation processes. We thus propose a method which models the unknown parts using latent variables. In doing so we extend the online algorithm pegasos to the structured prediction case (i.e., predicting the location of the bounding boxes) with latent part variables. To better estimate the parts, and to avoid over-fitting caused by the extra model complexity/capacity introduced by theparts, wepropose a two-stage trainingprocess, based on the primal rather than the dual form. We then show that the method outperforms the state-of-the-art (linear and non-linear kernel) trackers.

6 0.69924718 386 cvpr-2013-Self-Paced Learning for Long-Term Tracking

7 0.67387527 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

8 0.65465015 414 cvpr-2013-Structure Preserving Object Tracking

9 0.65190172 224 cvpr-2013-Information Consensus for Distributed Multi-target Tracking

10 0.62293339 209 cvpr-2013-Hypergraphs for Joint Multi-view Reconstruction and Multi-object Tracking

11 0.55546153 199 cvpr-2013-Harry Potter's Marauder's Map: Localizing and Tracking Multiple Persons-of-Interest by Nonnegative Discretization

12 0.54895198 249 cvpr-2013-Learning Compact Binary Codes for Visual Tracking

13 0.5483036 121 cvpr-2013-Detection- and Trajectory-Level Exclusion in Multiple Object Tracking

14 0.53365374 274 cvpr-2013-Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization

15 0.52553993 301 cvpr-2013-Multi-target Tracking by Rank-1 Tensor Approximation

16 0.52411395 331 cvpr-2013-Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis

17 0.46331739 440 cvpr-2013-Tracking People and Their Objects

18 0.44139987 308 cvpr-2013-Nonlinearly Constrained MRFs: Exploring the Intrinsic Dimensions of Higher-Order Cliques

19 0.43753514 143 cvpr-2013-Efficient Large-Scale Structured Learning

20 0.43623543 439 cvpr-2013-Tracking Human Pose by Tracking Symmetric Parts


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.168), (16, 0.01), (26, 0.061), (27, 0.011), (33, 0.225), (39, 0.02), (67, 0.079), (69, 0.053), (75, 0.193), (87, 0.081)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.86641562 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

Author: Junseok Kwon, Kyoung Mu Lee

Abstract: We propose a novel tracking algorithm that robustly tracks the target by finding the state which minimizes uncertainty of the likelihood at current state. The uncertainty of the likelihood is estimated by obtaining the gap between the lower and upper bounds of the likelihood. By minimizing the gap between the two bounds, our method finds the confident and reliable state of the target. In the paper, the state that gives the Minimum Uncertainty Gap (MUG) between likelihood bounds is shown to be more reliable than the state which gives the maximum likelihood only, especially when there are severe illumination changes, occlusions, and pose variations. A rigorous derivation of the lower and upper bounds of the likelihood for the visual tracking problem is provided to address this issue. Additionally, an efficient inference algorithm using Interacting Markov Chain Monte Carlo is presented to find the best state that maximizes the average of the lower and upper bounds of the likelihood and minimizes the gap between two bounds simultaneously. Experimental results demonstrate that our method successfully tracks the target in realistic videos and outperforms conventional tracking methods.

2 0.82903928 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

Author: Ian Endres, Kevin J. Shih, Johnston Jiaa, Derek Hoiem

Abstract: We propose a method to learn a diverse collection of discriminative parts from object bounding box annotations. Part detectors can be trained and applied individually, which simplifies learning and extension to new features or categories. We apply the parts to object category detection, pooling part detections within bottom-up proposed regions and using a boosted classifier with proposed sigmoid weak learners for scoring. On PASCAL VOC 2010, we evaluate the part detectors ’ ability to discriminate and localize annotated keypoints. Our detection system is competitive with the best-existing systems, outperforming other HOG-based detectors on the more deformable categories.

3 0.82541454 414 cvpr-2013-Structure Preserving Object Tracking

Author: Lu Zhang, Laurens van_der_Maaten

Abstract: Model-free trackers can track arbitrary objects based on a single (bounding-box) annotation of the object. Whilst the performance of model-free trackers has recently improved significantly, simultaneously tracking multiple objects with similar appearance remains very hard. In this paper, we propose a new multi-object model-free tracker (based on tracking-by-detection) that resolves this problem by incorporating spatial constraints between the objects. The spatial constraints are learned along with the object detectors using an online structured SVM algorithm. The experimental evaluation ofour structure-preserving object tracker (SPOT) reveals significant performance improvements in multi-object tracking. We also show that SPOT can improve the performance of single-object trackers by simultaneously tracking different parts of the object.

4 0.82194972 294 cvpr-2013-Multi-class Video Co-segmentation with a Generative Multi-video Model

Author: Wei-Chen Chiu, Mario Fritz

Abstract: Video data provides a rich source of information that is available to us today in large quantities e.g. from online resources. Tasks like segmentation benefit greatly from the analysis of spatio-temporal motion patterns in videos and recent advances in video segmentation has shown great progress in exploiting these addition cues. However, observing a single video is often not enough to predict meaningful segmentations and inference across videos becomes necessary in order to predict segmentations that are consistent with objects classes. Therefore the task of video cosegmentation is being proposed, that aims at inferring segmentation from multiple videos. But current approaches are limited to only considering binary foreground/background -inf .mpg . de segmentation and multiple videos of the same object. This is a clear mismatch to the challenges that we are facing with videos from online resources or consumer videos. We propose to study multi-class video co-segmentation where the number of object classes is unknown as well as the number of instances in each frame and video. We achieve this by formulating a non-parametric bayesian model across videos sequences that is based on a new videos segmentation prior as well as a global appearance model that links segments of the same class. We present the first multi-class video co-segmentation evaluation. We show that our method is applicable to real video data from online resources and outperforms state-of-the-art video segmentation and image co-segmentation baselines.

5 0.81997055 314 cvpr-2013-Online Object Tracking: A Benchmark

Author: Yi Wu, Jongwoo Lim, Ming-Hsuan Yang

Abstract: Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.

6 0.81990302 324 cvpr-2013-Part-Based Visual Tracking with Online Latent Structural Learning

7 0.81495976 408 cvpr-2013-Spatiotemporal Deformable Part Models for Action Detection

8 0.8147915 325 cvpr-2013-Part Discovery from Partial Correspondence

9 0.81416577 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation

10 0.81347531 400 cvpr-2013-Single Image Calibration of Multi-axial Imaging Systems

11 0.81308413 136 cvpr-2013-Discriminatively Trained And-Or Tree Models for Object Detection

12 0.81082797 311 cvpr-2013-Occlusion Patterns for Object Class Detection

13 0.80976099 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

14 0.80931467 154 cvpr-2013-Explicit Occlusion Modeling for 3D Object Class Representations

15 0.80831802 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

16 0.80831504 186 cvpr-2013-GeoF: Geodesic Forests for Learning Coupled Predictors

17 0.80654186 331 cvpr-2013-Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis

18 0.80628979 104 cvpr-2013-Deep Convolutional Network Cascade for Facial Point Detection

19 0.80582285 131 cvpr-2013-Discriminative Non-blind Deblurring

20 0.80537176 30 cvpr-2013-Accurate Localization of 3D Objects from RGB-D Data Using Segmentation Hypotheses